Monte Carlo Simulation of River Meander Modelling
NASA Astrophysics Data System (ADS)
Posner, A. J.; Duan, J. G.
2010-12-01
This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johannesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared with observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures were formulated to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution: owing to the random nature of the bank erosion coefficient, planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The approach couples the quasi-2D Ikeda (1989) flow solution with a Monte Carlo simulation of the bank erosion coefficient.
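The stochastic treatment described above can be sketched in a few lines: sample the erosion coefficient from a distribution and propagate it through a linear migration law. This is a minimal illustration only; the lognormal form and every parameter value are assumptions for the sketch, not figures from the study.

```python
import random
import statistics

def simulate_migration(n_trials=5000, e_median=1.0e-7, e_sigma=0.5,
                       excess_velocity=0.1, years=100.0, seed=42):
    """Monte Carlo sketch of an Ikeda-type linear bank erosion law,
    migration rate = E * u_b, with the erosion coefficient E treated as a
    lognormal random variable (all parameter values are illustrative)."""
    rng = random.Random(seed)
    seconds_per_year = 3.156e7
    migrations = []
    for _ in range(n_trials):
        e = e_median * rng.lognormvariate(0.0, e_sigma)  # stochastic E
        migrations.append(e * excess_velocity * years * seconds_per_year)
    return statistics.mean(migrations), statistics.stdev(migrations)

mean_m, std_m = simulate_migration()
```

The spread `std_m` is the quantity a deterministic, single-coefficient model cannot produce: it turns one predicted planform into an ensemble of plausible ones.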
Modeling and Computer Simulation: Molecular Dynamics and Kinetic Monte Carlo
Wirth, B.D.; Caturla, M.J.; Diaz de la Rubia, T.
2000-10-10
Recent years have witnessed tremendous advances in the realistic multiscale simulation of complex physical phenomena, such as irradiation and aging effects of materials, made possible by the enormous progress achieved in computational physics for calculating reliable, yet tractable interatomic potentials and the vast improvements in computational power and parallel computing. As a result, computational materials science is emerging as an important complement to theory and experiment to provide fundamental materials science insight. This article describes the atomistic modeling techniques of molecular dynamics (MD) and kinetic Monte Carlo (KMC), and an example of their application to radiation damage production and accumulation in metals. It is important to note at the outset that the primary objective of atomistic computer simulation should be obtaining physical insight into atomic-level processes. Classical molecular dynamics is a powerful method for obtaining insight about the dynamics of physical processes that occur on relatively short time scales. Current computational capability allows treatment of atomic systems containing as many as 10^9 atoms for times on the order of 100 ns (10^-7 s). The main limitation of classical MD simulation is the relatively short times accessible. Kinetic Monte Carlo provides the ability to reach macroscopic times by modeling diffusional processes and time-scales rather than individual atomic vibrations. Coupling MD and KMC has developed into a powerful, multiscale tool for the simulation of radiation damage in metals.
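The way KMC reaches macroscopic times is the residence-time (BKL/Gillespie) algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch, with two illustrative event rates standing in for diffusion hops with different barriers:

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time (BKL/Gillespie) kinetic Monte Carlo step: choose an
    event with probability proportional to its rate, then advance the clock
    by dt = -ln(u) / sum(rates), an exponential waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

rng = random.Random(1)
t, counts = 0.0, [0, 0]
for _ in range(10000):
    event, dt = kmc_step([1.0, 3.0], rng)  # two hypothetical hop rates
    counts[event] += 1
    t += dt
```

Because the clock advances by whole waiting times rather than femtosecond vibrations, the same number of steps covers vastly longer physical time than an MD trajectory would.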
Monte Carlo simulations of lattice models for single polymer systems
Hsu, Hsiao-Ping
2014-10-28
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √(10), we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
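The core of the pruned-enriched Rosenbluth method is plain Rosenbluth growth: extend the walk one lattice site at a time among unoccupied neighbors and carry a weight correcting for the bias. The sketch below implements only that core on the simple cubic lattice; the pruning/enrichment of PERM, which makes N ∼ 10^4 reachable, is deliberately omitted.

```python
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow_saw(n_steps, rng):
    """Grow one self-avoiding walk on the simple cubic lattice with the plain
    Rosenbluth method. Returns (end-to-end squared distance, Rosenbluth
    weight), or None if the walk traps itself before reaching n_steps."""
    pos = (0, 0, 0)
    visited = {pos}
    weight = 1.0
    for _ in range(n_steps):
        options = [(pos[0] + dx, pos[1] + dy, pos[2] + dz)
                   for dx, dy, dz in MOVES
                   if (pos[0] + dx, pos[1] + dy, pos[2] + dz) not in visited]
        if not options:
            return None  # attrition, which PERM's pruning/enrichment cures
        weight *= len(options) / 6.0  # corrects the bias of avoiding occupied sites
        pos = rng.choice(options)
        visited.add(pos)
    return pos[0] ** 2 + pos[1] ** 2 + pos[2] ** 2, weight

rng = random.Random(2)
samples = [s for s in (grow_saw(20, rng) for _ in range(2000)) if s]
mean_r2 = sum(r2 * w for r2, w in samples) / sum(w for _, w in samples)
```

The weighted average of R^2 already exceeds the ideal random-walk value N, the excluded-volume swelling that the scaling analyses in the abstract quantify.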
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
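The parametric testing described above amounts to Monte Carlo sampling of the muscle-parameter space against a physiological target. The sketch below shows the generic pattern only; the parameter names, bounds, and toy objective are illustrative assumptions, not LifeModeler's actual parameters or the study's objective function.

```python
import random

def monte_carlo_search(objective, bounds, n_samples=2000, seed=0):
    """Random-sampling Monte Carlo parameter sweep: draw candidate parameter
    sets uniformly within bounds and keep the set that minimizes the
    mismatch objective."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_samples):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        err = objective(params)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# toy objective: distance to a hypothetical "physiological" target
target = {"f_max": 1000.0, "activation_peak": 0.6}
objective = lambda p: sum((p[k] - target[k]) ** 2 for k in target)
best, err = monte_carlo_search(objective,
                               {"f_max": (500.0, 2000.0),
                                "activation_peak": (0.0, 1.0)})
```

In practice the objective would score simulated activation patterns against observed EMG-style activations, and a combinatorial reduction step would narrow the bounds between sweeps.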
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.
Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O
2015-04-01
Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which, they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry.
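Two ingredients of the hybrid scheme above are easy to isolate: the ballistic fraction that survives unscattered to the focal plane, and the Beer-Lambert sampling of the first scattering event after which a conventional Monte Carlo random walk takes over. A minimal sketch, with an illustrative attenuation coefficient:

```python
import math
import random

def ballistic_fraction(mu_t, focal_depth):
    """Unscattered (ballistic) fraction surviving to the focal plane,
    exp(-mu_t * z_f): the component that carries the Gaussian-optics focus
    and that a purely diffusive launch would misplace."""
    return math.exp(-mu_t * focal_depth)

def first_scatter_path(mu_t, rng):
    """Free path to the first scattering event, sampled from Beer-Lambert
    statistics; beyond this point the photon is handed to a conventional
    Monte Carlo random walk."""
    return -math.log(1.0 - rng.random()) / mu_t

rng = random.Random(5)
mu_t = 10.0  # total attenuation coefficient, 1/cm (illustrative)
paths = [first_scatter_path(mu_t, rng) for _ in range(20000)]
mean_path = sum(paths) / len(paths)
```

The sampled mean free path converges to 1/mu_t; in the full method each ballistic segment follows the Gaussian-optics ray toward the focus rather than a straight axial line.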
Modeling root-reinforcement with a Fiber-Bundle Model and Monte Carlo simulation
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper uses sensitivity analysis and a Fiber-Bundle Model (FBM) to examine assumptions underpinning root-reinforcement models. First, different methods for apportioning load between intact roots were investigated. Second, a Monte Carlo approach was used to simulate plants with heartroot, platero...
Modeling of hysteresis loops by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.
2015-12-01
Recent advances in MC simulations of magnetic properties are mostly devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For an arbitrary assembly of magnetic moments, we propose MC simulations based on a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm, and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant on the coercive field for different temperatures and algorithms. A study of the angular acceptance move distribution allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads us to better quantify coercive field measurements resulting from physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
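The "cone algorithm" mentioned above is, at the single-spin level, a Metropolis update restricted to a small cone around the current spin direction. The sketch below shows that update for one classical spin with only the uniaxial anisotropy term; the exchange field, the slab geometry, and the parameter values are all simplifications assumed for illustration.

```python
import math
import random

def anis_energy(spin, k_u):
    """Uniaxial anisotropy energy -K_u * s_z^2 of a unit classical spin."""
    return -k_u * spin[2] ** 2

def cone_update(spin, k_u, temp, rng, cone=0.3):
    """One Metropolis step of the cone algorithm: perturb the spin within a
    small cone about its current direction, renormalize to unit length, and
    accept with the Boltzmann probability (exchange field omitted here)."""
    trial = tuple(s + cone * (2.0 * rng.random() - 1.0) for s in spin)
    norm = math.sqrt(sum(s * s for s in trial))
    trial = tuple(s / norm for s in trial)
    d_e = anis_energy(trial, k_u) - anis_energy(spin, k_u)
    if d_e <= 0.0 or rng.random() < math.exp(-d_e / temp):
        return trial
    return spin

rng = random.Random(9)
spin = (1.0, 0.0, 0.0)  # start in the hard plane
for _ in range(5000):
    spin = cone_update(spin, k_u=1.0, temp=0.05, rng=rng)
```

At low temperature the spin relaxes toward the easy axis (|s_z| near 1); sweeping an added Zeeman term up and down in field is what traces out a hysteresis loop.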
Monte Carlo simulations of a model two-dimensional, two-patch colloidal particles
NASA Astrophysics Data System (ADS)
Rżysko, W.; Sokołowski, S.; Staszewski, T.
2015-08-01
We carried out Monte Carlo simulations of the two-patch colloids in two-dimensions. Similar model investigated theoretically in three-dimensions exhibited a re-entrant phase transition. Our simulations indicate that no re-entrant transition exists and the phase diagram for the system is of a swan-neck type and corresponds solely to the fluid-solid transition.
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
Monte Carlo simulation of CP^(N-1) models
Campostrini, M.; Rossi, P.; Vicari, E.
1992-09-15
Numerical simulations of two-dimensional CP^(N-1) models are performed at N = 2, 10, and 21. The lattice action adopted depends explicitly on the gauge degrees of freedom and shows precocious scaling. Our tests of scaling are the stability of adimensional physical quantities (second moment of the correlation function versus inverse mass gap, magnetic susceptibility versus square correlation length) and rotation invariance. Topological properties of the models are explored by measuring the topological susceptibility and by extracting the Abelian string tension. Several different (local and nonlocal) lattice definitions of the topological charge are discussed and compared. The qualitative physical picture derived from the continuum 1/N expansion is confirmed, and agreement with quantitative 1/N predictions is satisfactory. Variant (Symanzik-improved) actions are considered in the CP^1 ≈ O(3) case and agreement with universality and previous simulations (when comparable) is found. The simulation algorithm is an efficient mixture of over-heat-bath and microcanonical algorithms. The dynamical features and critical exponents of the algorithm are discussed in detail.
Direct simulation Monte Carlo modeling of e-beam metal deposition
Venkattraman, A.; Alexeenko, A. A.
2010-07-15
The three-dimensional direct simulation Monte Carlo (DSMC) method is applied here to model the electron-beam physical vapor deposition of copper thin films. Various molecular models for copper-copper interactions have been considered, and a suitable molecular model has been determined based on comparisons of dimensional mass fluxes obtained from simulations and previous experiments. The variable hard sphere model determined for atomic copper vapor can be used in DSMC simulations for the design and analysis of vacuum deposition systems, allowing for accurate prediction of growth rates, uniformity, and microstructure.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as low Earth orbit, lunar orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Monte Carlo simulation based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, Rizal; Waris, Abdul; Viridi, Sparisoma
2016-09-01
Nuclear fission has conventionally been modeled using two approaches, macroscopic and microscopic. This work proposes another approach, in which the nucleus is treated as a toy model. The aim is to assess the usefulness of the particle distribution in fission yield calculations. Since the nucleus is a toy, the Fission Toy Model (FTM) does not completely represent the real process in nature. A fission event in the FTM is represented by one random number, taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. Adopting the nucleon density approximation, a Gaussian distribution is chosen as the particle distribution. This distribution function generates random numbers that randomize the distances between particles and a central point. The scission process starts by splitting the compound nucleus central point into two parts, a left and a right central point. The yield is determined from the portion of the nucleon distribution, which is proportional to the portion of mass numbers. Using the modified FTM, the characteristics of the particle distribution in each fission event can be formed before the fission process. These characteristics can be used to make predictions about real nucleon interactions in the fission process. The results of the FTM calculation suggest that the γ value plays the role of an energy.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
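The baseline the decorator design improves upon is the generic "black-box" random position generator: rejection sampling against the density's bounding box. A minimal sketch with an illustrative Gaussian toy density (the density, bounds, and envelope value are assumptions for the example):

```python
import math
import random

def sample_position(density, bounds, d_max, rng):
    """Generic black-box random-position generator via rejection sampling:
    draw a uniform point in the bounding box and accept it with probability
    density(p) / d_max, where d_max bounds the density from above."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    while True:
        p = (rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        if rng.random() * d_max <= density(p):
            return p

# toy density: an isotropic Gaussian blob (illustrative)
rho = lambda p: math.exp(-(p[0] ** 2 + p[1] ** 2 + p[2] ** 2))
rng = random.Random(11)
pts = [sample_position(rho, ((-3, 3),) * 3, 1.0, rng) for _ in range(5000)]
mean_x = sum(p[0] for p in pts) / len(pts)
```

For strongly peaked or clumpy densities the rejection rate of this generic scheme becomes severe, which is why component-specific generators, composed and modified through decorators, are worth the design effort.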
Monte Carlo simulation for light propagation in 3D tooth model
NASA Astrophysics Data System (ADS)
Fu, Yongji; Jacques, Steven L.
2011-03-01
Monte Carlo (MC) simulation was implemented in a three-dimensional tooth model to simulate light propagation in the tooth for antibiotic photodynamic therapy (PDT) and other laser therapies. The goal of this research is to estimate the light energy deposited in the target region of the tooth given the light source information, tooth optical properties, and tooth structure. Two use cases are presented to demonstrate the practical application of this model: one compares the dose distributions of an isotropic point source and a narrow beam, and the other compares different incidence points for the same light source. This model will help clinicians design PDT treatments for the tooth.
NASA Astrophysics Data System (ADS)
Erdem, Riza; Aydiner, Ekrem
2009-03-01
Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. It was shown that the results well match the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.
Modeling weight variability in a pan coating process using Monte Carlo simulations.
Pandey, Preetanshu; Katakdaunde, Manoj; Turton, Richard
2006-10-06
The primary objective of the current study was to investigate process variables affecting weight gain mass coating variability (CV(m)) in pan coating devices using novel video-imaging techniques and Monte Carlo simulations. Experimental information such as the tablet location, circulation time distribution, velocity distribution, projected surface area, and spray dynamics was the main input to the simulations. The data on the dynamics of tablet movement were obtained using novel video-imaging methods. The effects of pan speed, pan loading, tablet size, coating time, spray flux distribution, and spray area and shape were investigated. CV(m) was found to be inversely proportional to the square root of coating time. The spray shape was not found to affect the CV(m) of the process significantly, but an increase in the spray area led to lower CV(m) values. Coating experiments were conducted to verify the predictions from the Monte Carlo simulations, and the trends predicted from the model were in good agreement. It was observed that the Monte Carlo simulations underpredicted CV(m) values in comparison to the experiments. The model developed can provide a basis for adjustments in process parameters required during scale-up operations and can be useful in predicting the process changes that are needed to achieve the same CV(m) when a variable is altered.
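The inverse-square-root dependence of CV(m) on coating time follows from averaging: each circulation through the spray zone deposits a random mass increment, and per-tablet totals even out as increments accumulate. A minimal sketch (the uniform increment distribution and counts are illustrative assumptions, not the study's measured spray statistics):

```python
import random
import statistics

def coating_cv(n_tablets, n_passes, rng):
    """Pan-coating sketch: each circulation deposits a random mass increment
    on a tablet; the coefficient of variation of the per-tablet totals
    decays as 1/sqrt(number of passes), i.e. 1/sqrt(coating time)."""
    totals = [sum(rng.random() for _ in range(n_passes))
              for _ in range(n_tablets)]
    return statistics.stdev(totals) / statistics.mean(totals)

rng = random.Random(7)
cv_short = coating_cv(400, 25, rng)   # shorter coating time
cv_long = coating_cv(400, 100, rng)   # 4x longer -> CV should roughly halve
```

Quadrupling the coating time roughly halves the CV, matching the scaling reported in the abstract; the real simulation replaces the uniform increments with measured circulation-time and spray-flux distributions.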
Flat-histogram methods in quantum Monte Carlo simulations: Application to the t-J model
NASA Astrophysics Data System (ADS)
Diamantis, Nikolaos G.; Manousakis, Efstratios
2016-09-01
We discuss that flat-histogram techniques can be appropriately applied in the sampling of quantum Monte Carlo simulation in order to improve the statistical quality of the results at long imaginary time or low excitation energy. Typical imaginary-time correlation functions calculated in quantum Monte Carlo are subject to exponentially growing errors as the range of imaginary time grows and this smears the information on the low energy excitations. We show that we can extract the low energy physics by modifying the Monte Carlo sampling technique to one in which configurations which contribute to making the histogram of certain quantities flat are promoted. We apply the diagrammatic Monte Carlo (diag-MC) method to the motion of a single hole in the t-J model and we show that the implementation of flat-histogram techniques allows us to calculate the Green's function in a wide range of imaginary-time. In addition, we show that applying the flat-histogram technique alleviates the “sign”-problem associated with the simulation of the single-hole Green's function at long imaginary time.
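The flat-histogram idea can be shown in miniature with a Wang-Landau-style walk over discrete bins: penalizing visited bins forces the walker into rarely sampled regions, which is exactly what rescues the long-imaginary-time tail. This is a generic sketch of the flat-histogram mechanism, not the diag-MC-specific implementation of the paper; the bin count and fixed modification factor are illustrative.

```python
import math
import random

def flat_histogram_walk(n_bins=5, n_steps=20000, seed=3):
    """Minimal Wang-Landau-style flat-histogram sampler: random-walk over
    bins, accepting a proposed move with min(1, g(old)/g(new)) where g is
    the running density-of-states estimate; incrementing g at every visit
    flattens the visit histogram so rare bins get sampled."""
    rng = random.Random(seed)
    log_g = [0.0] * n_bins  # ln g estimate, updated on the fly
    hist = [0] * n_bins
    ln_f = 1.0              # modification factor (normally reduced in stages)
    state = 0
    for _ in range(n_steps):
        trial = rng.randrange(n_bins)
        if math.log(rng.random() + 1e-300) < log_g[state] - log_g[trial]:
            state = trial
        log_g[state] += ln_f
        hist[state] += 1
    return hist

hist = flat_histogram_walk()
```

The resulting histogram is nearly uniform across bins; in the diag-MC application the "bins" are ranges of imaginary time, so the flat walk supplies statistics where the bare weight would be exponentially small.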
Multicanonical Monte Carlo simulations of anisotropic SU(3) and SU(4) Heisenberg models
NASA Astrophysics Data System (ADS)
Harada, Kenji; Kawashima, Naoki; Troyer, Matthias
2009-03-01
We present the results of multicanonical Monte Carlo simulations on two-dimensional anisotropic SU(3) and SU(4) Heisenberg models. In our previous study [K. Harada, et al., J. Phys. Soc. Jpn. 76, 013703 (2007)], we found evidence for a direct quantum phase transition from the valence-bond-solid(VBS) phase to the SU(3) symmetry breaking phase on the SU(3) model and we proposed the possibility of deconfined critical phenomena (DCP) [T. Senthil, et al., Science 303, 1490 (2004); T. Grover and T. Senthil, Phys. Rev. Lett. 98, 247202 (2007)]. Here we will present new results with an improved algorithm, using a multicanonical Monte Carlo algorithm. Using a flow method-like technique [A.B. Kuklov, et al., Annals of Physics 321, 1602 (2006)], we discuss the possibility of DCP in both models.
NASA Astrophysics Data System (ADS)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-01
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
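For reference, the VHS model referred to above makes the effective collision diameter a power law in relative speed, which is what ties the cross section to a viscosity scaling mu ∝ T^omega. The sketch below uses Bird's standard VHS diameter form; the particular numeric inputs are illustrative, not the calibrated parameters of this paper.

```python
import math

def vhs_diameter(c_r, d_ref, omega, t_ref, m_r):
    """Bird's Variable Hard Sphere collision diameter:
    d^2 = d_ref^2 * (2 k T_ref / (m_r c_r^2))^(omega - 1/2) / Gamma(5/2 - omega),
    so the diameter shrinks with relative speed c_r for omega > 1/2."""
    k_b = 1.380649e-23
    factor = (2.0 * k_b * t_ref / (m_r * c_r ** 2)) ** (omega - 0.5)
    return d_ref * math.sqrt(factor / math.gamma(2.5 - omega))

# hard-sphere limit (omega = 0.5): the diameter reduces to d_ref exactly
d_hs = vhs_diameter(c_r=1000.0, d_ref=4.0e-10, omega=0.5,
                    t_ref=273.0, m_r=4.65e-26)
```

Calibration in the paper amounts to choosing d_ref and omega (plus the VSS deflection exponent alpha) per collision pair so that the resulting collision integrals match the ab initio values.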
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model which has two choices of boundary settings, free boundary condition and Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of highly charged ion state outputs under free boundary condition and similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.
Consistent Modeling of Hypersonic Nonequilibrium Flows using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Zhang, Chonglin
Hypersonic flows involve strong thermal and chemical nonequilibrium due to steep gradients in gas properties in the shock layer, wake, and next to vehicle surfaces. Accurate simulation of hypersonic nonequilibrium flows requires consideration of the molecular nature of the gas including internal energy excitation (translational, rotational, and vibrational energy modes) as well as chemical reaction processes such as dissociation. Both continuum and particle simulation methods are available to simulate such complex flow phenomena. Specifically, the direct simulation Monte Carlo (DSMC) method is widely used to model such complex nonequilibrium phenomena within a particle-based numerical method. This thesis describes in detail how the different types of DSMC thermochemical models should be implemented in a rigorous and consistent manner. In the process, new algorithms are developed including a new framework for phenomenological models able to incorporate results from computational chemistry. Using this framework, a new DSMC model for rotational energy exchange is constructed. General algorithms are developed for the various types of methods that inherently satisfy microscopic reversibility, detailed balance, and equipartition of energy in equilibrium. Furthermore, a new framework for developing rovibrational state-to-state DSMC collision models is proposed, and a vibrational state-to-state model is developed along the course. The overall result of this thesis is a rigorous and consistent approach to bridge molecular physics and computational chemistry through stochastic molecular simulation to continuum models for gases in strong thermochemical nonequilibrium.
Monte Carlo Simulations for Radiobiology
NASA Astrophysics Data System (ADS)
Ackerman, Nicole; Bazalova, Magdalena; Chang, Kevin; Graves, Edward
2012-02-01
The relationship between tumor response and radiation is currently modeled as dose, quantified on the mm or cm scale through measurement or simulation. This does not take into account modern knowledge of cancer, including tissue heterogeneities and repair mechanisms. We perform Monte Carlo simulations utilizing Geant4 to model radiation treatment on a cellular scale. Biological measurements are correlated to simulated results, primarily the energy deposited in nuclear volumes. One application is modeling dose enhancement through the use of high-Z materials, such as gold nanoparticles. The model matches in vitro data and predicts dose enhancement ratios for a variety of in vivo scenarios. This model shows promise for both treatment design and furthering our understanding of radiobiology.
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah; Kurniadi, Rizal
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed with Monte Carlo simulation. The results show that variations in σ or μ can significantly shift the average frequency of asymmetric fission yields and also vary the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yield calculation using the toy model successfully shows the same tendency as experimental results, where the average light fission yield is in the range of 90
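The Gaussian-pair picture above can be sketched by drawing a light-fragment mass number from one Gaussian and assigning the complement to the heavy fragment so that mass is conserved. The mean, width, and total mass below are illustrative assumptions, not the model's fitted parameters.

```python
import random
import statistics

def sample_yields(n_events, mu_light, sigma_light, a_total=236, seed=13):
    """Toy-model sketch of fission yields: each event draws a light-fragment
    mass number from a Gaussian and assigns A_heavy = A_total - A_light,
    conserving the total mass number (values are illustrative)."""
    rng = random.Random(seed)
    light, heavy = [], []
    for _ in range(n_events):
        a_l = rng.gauss(mu_light, sigma_light)
        light.append(a_l)
        heavy.append(a_total - a_l)
    return statistics.mean(light), statistics.mean(heavy)

mean_light, mean_heavy = sample_yields(10000, mu_light=95.0, sigma_light=6.0)
```

Varying mu or sigma shifts the asymmetric-yield peaks and broadens or narrows the distribution, mirroring the sensitivity study described in the abstract.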
Eged, Katalin; Kis, Zoltán; Voigt, Gabriele
2006-01-01
After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway requires three main model components: (i) calculation of the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) combination of these elements in a relevant urban model to calculate the resulting doses for the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches of photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and a combination of relative surface contamination with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short intercomparison of model features.
A geometrical model for the Monte Carlo simulation of the TrueBeam linac
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Sempau, J.; Fogliata, A.; Cozzi, L.; Sauerwein, W.; Brualla, L.
2015-06-01
Monte Carlo simulation of linear accelerators (linacs) depends on the accurate geometrical description of the linac head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files of the flattening-filter-free (FFF) beams tallied at a plane located just upstream of the jaws. Yet, Monte Carlo simulations based on third-party tallied phase spaces are subject to limitations. In this work, an experimentally based geometry developed for the simulation of the FFF beams of the Varian TrueBeam linac is presented. The Monte Carlo geometrical model of the TrueBeam linac uses information provided by Varian that reveals large similarities between the TrueBeam machine and the Clinac 2100 downstream of the jaws. Thus, the upper part of the TrueBeam linac was modeled by introducing modifications to the Varian Clinac 2100 linac geometry. The most important of these modifications is the replacement of the standard flattening filters by ad hoc thin filters. These filters were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements performed with a diode detector for radiation fields ranging from 3 × 3 to 40 × 40 cm2 at depths of maximum dose of 5 and 10 cm. Indicators of agreement between the experimental data and the simulation results obtained with the proposed geometrical model were the dose differences, the root-mean-square error and the gamma index. The same comparisons were performed for dose profiles obtained from Monte Carlo simulations using the phase-space files distributed by Varian for the TrueBeam linac as the sources of particles. Results of comparisons show a good agreement of the dose for the ansatz geometry similar to that obtained for the simulations with the TrueBeam phase-space files for all fields and depths considered, except for the
Evaluation of angular scattering models for electron-neutral collisions in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Janssen, J. F. J.; Pitchford, L. C.; Hagelaar, G. J. M.; van Dijk, J.
2016-10-01
In Monte Carlo simulations of electron transport through a neutral background gas, simplifying assumptions about the shape of the angular distribution of electron-neutral scattering cross sections are usually made, mainly because full sets of differential scattering cross sections are rarely available. In this work, simple models for angular scattering are compared to results from the recent quantum calculations of Zatsarinny and Bartschat for differential scattering cross sections (DCSs) from zero to 200 eV in argon. These simple models represent, in various ways, the trend toward forward scattering with increasing electron energy. The simple models are then used in Monte Carlo simulations of range, straggling, and backscatter of electrons emitted from a surface into a volume filled with a neutral gas. It is shown that the assumptions of isotropic elastic scattering and of forward scattering for the inelastic collision process yield results within a few percent of those calculated using the DCSs of Zatsarinny and Bartschat. The quantities held constant in these comparisons are the elastic momentum transfer and total inelastic cross sections.
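As a rough illustration of how such simple angular models differ, the sketch below contrasts isotropic sampling of the scattering-angle cosine with an energy-dependent forward-peaked parameterization. A Surendra-type formula is used purely as an example of this class of models; it is not necessarily one of the specific models evaluated in the paper.

```python
import random

def cos_theta_isotropic(rng):
    """Isotropic model: scattering-angle cosine uniform on [-1, 1]."""
    return 1.0 - 2.0 * rng.random()

def cos_theta_forward(energy_ev, rng):
    """Energy-dependent forward-peaked model (Surendra-type
    parameterization, used here only as an illustration): reduces to
    isotropic sampling as energy -> 0 and becomes increasingly
    forward-peaked as energy grows."""
    r = rng.random()
    if energy_ev < 1e-9:
        return 1.0 - 2.0 * r
    return (2.0 + energy_ev - 2.0 * (1.0 + energy_ev) ** r) / energy_ev

rng = random.Random(1)
n = 20000
iso_mean = sum(cos_theta_isotropic(rng) for _ in range(n)) / n
fwd_mean = sum(cos_theta_forward(100.0, rng) for _ in range(n)) / n
```

At 100 eV the forward model's mean cosine is strongly positive, while the isotropic model's mean is zero; exactly this difference is what the momentum-transfer cross section must absorb for transport quantities to agree.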
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher order correlations in the conventional moment closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid dependent Monte Carlo scheme was studied, since it is a logical alternative, wherein the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
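A minimal sketch of the weighted-objective idea follows, with a toy analytic surrogate standing in for the RSM and a crude random search standing in for the hybrid particle-swarm/Nelder-Mead optimizer. All function names and parameter values below are illustrative assumptions, not the authors' implementation:

```python
import random

def objective(params, target_mean, target_var, model):
    """Equally weighted sum of squared discrepancies between the
    model-predicted and test-derived response mean and variance."""
    pred_mean, pred_var = model(params)
    j_mean = sum((m - t) ** 2 for m, t in zip(pred_mean, target_mean))
    j_var = sum((v - t) ** 2 for v, t in zip(pred_var, target_var))
    return 0.5 * j_mean + 0.5 * j_var

def toy_surrogate(params):
    """Hypothetical stand-in for the response surface model: maps a
    parameter mean/spread (mu, sigma) to a response mean/variance via
    first-order propagation through y = x**2."""
    mu, sigma = params
    return [mu ** 2], [4.0 * mu ** 2 * sigma ** 2]

def random_search(target_mean, target_var, n_iter=5000, seed=0):
    """Crude global search standing in for the hybrid particle-swarm /
    Nelder-Mead optimizer of the paper."""
    rng = random.Random(seed)
    best, best_j = None, float("inf")
    for _ in range(n_iter):
        cand = (rng.uniform(0.1, 5.0), rng.uniform(0.01, 1.0))
        j = objective(cand, target_mean, target_var, toy_surrogate)
        if j < best_j:
            best, best_j = cand, j
    return best, best_j

# Recover mu ~ 2, sigma ~ 0.25 from a target response mean of 4, variance 1.
(best_mu, best_sigma), best_j = random_search([4.0], [1.0])
```

The key structural point survives the simplification: mean and covariance targets enter one scalar objective with equal weights, so a single optimizer updates both simultaneously.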
NASA Astrophysics Data System (ADS)
Moulin, F.; Picaud, S.; Hoang, P. N. M.; Jedlovszky, P.
2007-10-01
The grand canonical Monte Carlo method is used to simulate the adsorption isotherms of water molecules on different types of model soot particles. The soot particles are modeled by graphite-type layers arranged in an onionlike structure that contains randomly distributed hydrophilic sites, such as OH and COOH groups. The calculated water adsorption isotherm at 298 K exhibits different characteristic shapes depending both on the type and location of the hydrophilic sites and on the size of the pores inside the soot particle. The different shapes of the adsorption isotherms result from different ways of water aggregation in and/or around the soot particle. The present results show the very weak influence of the OH sites on the water adsorption process compared to the COOH sites. The results of these simulations can help in interpreting the experimental isotherms of water adsorbed on aircraft soot.
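The particle-exchange moves underlying such grand canonical adsorption simulations can be sketched compactly. The fragment below implements the standard GCMC insertion/deletion acceptance rules (Frenkel-Smit form, thermal de Broglie wavelength set to 1) and checks them against the ideal-gas limit; it is a schematic illustration of the method, not the soot model of the paper:

```python
import math
import random

def gcmc_step(n, volume, mu, beta, du_insert, du_delete, rng):
    """One grand canonical MC particle-exchange step.

    du_insert / du_delete are the interaction-energy changes of the
    trial moves (zero for an ideal gas); in a real adsorption run they
    would come from the water-water and water-site potentials.
    """
    z = math.exp(beta * mu)               # activity
    if rng.random() < 0.5:                # trial insertion
        acc = min(1.0, z * volume / (n + 1) * math.exp(-beta * du_insert))
        if rng.random() < acc:
            return n + 1
    elif n > 0:                           # trial deletion
        acc = min(1.0, n / (z * volume) * math.exp(-beta * du_delete))
        if rng.random() < acc:
            return n - 1
    return n

# Ideal-gas check: with activity z = exp(beta*mu) = 1 and V = 50,
# the average particle number should converge to z * V = 50.
rng = random.Random(42)
n = 0
samples = []
for step in range(200000):
    n = gcmc_step(n, volume=50.0, mu=0.0, beta=1.0,
                  du_insert=0.0, du_delete=0.0, rng=rng)
    if step > 50000:
        samples.append(n)
avg_n = sum(samples) / len(samples)
```

Sweeping the chemical potential mu and recording the equilibrium particle count is precisely how an adsorption isotherm is traced out in this ensemble.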
NASA Astrophysics Data System (ADS)
Hobler, Gerhard; Bradley, R. Mark; Urbassek, Herbert M.
2016-05-01
Sigmund's model of spatially resolved sputtering is the underpinning of many models of nanoscale pattern formation induced by ion bombardment. It is based on three assumptions: (i) the number of sputtered atoms is proportional to the nuclear energy deposition (NED) near the surface, (ii) the NED distribution is independent of the orientation and shape of the solid surface and is identical to the one in an infinite medium, and (iii) the NED distribution in an infinite medium can be approximated by a Gaussian. We test the validity of these assumptions using Monte Carlo simulations of He, Ar, and Xe impacts on Si at energies of 2, 20, and 200 keV with incidence angles from perpendicular to grazing. We find that for the more commonly-employed beam parameters (Ar and Xe ions at 2 and 20 keV and nongrazing incidence), the Sigmund model's predictions are within a factor of 2 of the Monte Carlo results for the total sputter yield and the first two moments of the spatially resolved sputter yield. This is partly due to a compensation of errors introduced by assumptions (i) and (ii). The Sigmund model, however, does not describe the skewness of the spatially resolved sputter yield, which is almost always significant. The approximation is much poorer for He ions and/or high energies (200 keV). All three of Sigmund's assumptions break down at grazing incidence angles. In all cases, we discuss the origin of the deviations from Sigmund's model.
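Assumption (iii), the Gaussian NED distribution, translates directly into a closed form for the spatially resolved sputter yield in Sigmund's model. The sketch below evaluates a reduced two-dimensional version (one lateral surface coordinate) of that Gaussian model and its low-order moments by simple quadrature; the depth and width parameters are arbitrary illustrations, not fitted to any ion/target pair from the paper:

```python
import math

def sigmund_yield_density(x, a, alpha, beta, lam=1.0):
    """Sigmund-type spatially resolved sputter yield (2-D sketch):
    energy deposited at surface point x by an ion whose Gaussian NED
    distribution is centered a distance a below the impact point, with
    longitudinal width alpha and lateral width beta."""
    return (lam / (2.0 * math.pi * alpha * beta)
            * math.exp(-a * a / (2.0 * alpha * alpha))
            * math.exp(-x * x / (2.0 * beta * beta)))

def yield_moments(a, alpha, beta, x_max=20.0, dx=0.01):
    """Total yield (zeroth moment) plus mean and second moment of the
    spatially resolved yield, via numerical quadrature."""
    xs = [i * dx - x_max for i in range(int(2 * x_max / dx) + 1)]
    w = [sigmund_yield_density(x, a, alpha, beta) for x in xs]
    m0 = sum(w) * dx
    m1 = sum(x * wi for x, wi in zip(xs, w)) * dx / m0
    m2 = sum(x * x * wi for x, wi in zip(xs, w)) * dx / m0
    return m0, m1, m2

# Arbitrary illustrative parameters.
m0, mean_x, second_moment = yield_moments(a=3.0, alpha=2.0, beta=1.5)
```

Because the model is an exact Gaussian, its first moment vanishes and every odd moment with it; this is precisely why the skewness of the spatially resolved yield reported by the Monte Carlo simulations cannot be captured within Sigmund's assumptions.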
Iterative optimisation of Monte Carlo detector models using measurements and simulations
NASA Astrophysics Data System (ADS)
Marzocchi, O.; Leone, D.
2015-04-01
This work proposes a new technique to optimise Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: acquisition in the laboratory of measurement data to be used as reference; modification of a previously available detector model; simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and solution of the system of equations to update the detector model. Steps three and four can be repeated for more accurate results. This method avoids the "try and fail" approach typical of prior techniques.
Optical model for port-wine stain skin and its Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Xu, Lanqing; Xiao, Zhengying; Chen, Rong; Wang, Ying
2008-12-01
Laser irradiation is at present the most accepted therapy for PWS patients. Its efficacy is highly dependent on the rules of energy deposition in skin, so a better understanding of light propagation in PWS skin is indispensable for achieving optimal treatment parameters. Traditional Monte Carlo simulations using simple geometries such as planar layered tissue models cannot provide the energy deposition in skin with enlarged blood vessels. In this paper, the structure of normal skin and the pathological character of PWS skin are analyzed in detail, and the true structure is simplified into a hybrid layered mathematical model that characterizes the two most important aspects of PWS skin: the layered structure and the overabundant dermal vessels. The basic laser-tissue interaction mechanisms in skin were investigated, and the optical parameters of PWS skin tissue at the therapeutic wavelength were determined. Monte Carlo (MC) based techniques were chosen to calculate the energy deposition in the skin. The results can be used in choosing the optical dosage, and further simulations can be used to predict optimal laser parameters to achieve high-efficacy laser treatment of PWS.
Monte-Carlo simulations of a coarse-grained model for α-oligothiophenes
NASA Astrophysics Data System (ADS)
Almutairi, Amani; Luettmer-Strathmann, Jutta
The interfacial layer of an organic semiconductor in contact with a metal electrode has important effects on the performance of thin-film devices. However, the structure of this layer is not easy to model. Oligothiophenes are small, π-conjugated molecules with applications in organic electronics that also serve as small-molecule models for polythiophenes. α-hexithiophene (6T) is a six-ring molecule, whose adsorption on noble metal surfaces has been studied extensively (see, e.g., Ref.). In this work, we develop a coarse-grained model for α-oligothiophenes. We describe the molecules as linear chains of bonded, discotic particles with Gay-Berne potential interactions between non-bonded ellipsoids. We perform Monte Carlo simulations to study the structure of isolated and adsorbed molecules
NASA Astrophysics Data System (ADS)
Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard
2016-03-01
Laser speckle imaging (LSI) enables measurement of relative blood flow in microvasculature and perfusion in tissues. To determine the impact of tissue optical properties and perfusion dynamics on speckle contrast, we developed a computational simulation of laser speckle contrast imaging. We used a discrete absorption-weighted Monte Carlo simulation to model the transport of light in tissue. We simulated optical excitation of a uniform flat light source and tracked the momentum transfer of photons as they propagated through a simulated tissue geometry. With knowledge of the probability distribution of momentum transfer occurring in various layers of the tissue, we calculated the expected laser speckle contrast arising with coherent excitation using both reflectance and transmission geometries. We simulated light transport in a single homogeneous tissue while independently varying either absorption (0.001-100 mm^-1), reduced scattering (0.1-10 mm^-1), or anisotropy (0.05-0.99) over a range of values relevant to blood and commonly imaged tissues. We observed that contrast decreased by 49% with an increase in optical scattering, and observed a 130% increase with absorption (exposure time = 1 ms). We also explored how speckle contrast was affected by the depth (0-1 mm) and flow speed (0-10 mm/s) of a dynamic vascular inclusion. This model of speckle contrast is important to increase our understanding of how parameters such as perfusion dynamics, vessel depth, and tissue optical properties affect laser speckle imaging.
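The absorption-weighted photon transport at the core of such simulations can be illustrated with a toy one-dimensional slab model. In the sketch below, each photon's weight is attenuated by the single-scattering albedo at every interaction, and a crude momentum-transfer tally is accumulated along each transmitted path. The geometry, the scattering treatment, and the momentum-transfer proxy are all deliberate simplifications for illustration, not the authors' model:

```python
import math
import random

def simulate_slab(mu_a, mu_s, thickness, n_photons=20000, seed=3):
    """Toy 1-D absorption-weighted photon Monte Carlo in a slab.

    Photons enter at z = 0 heading into the slab. Step lengths are
    sampled from the total attenuation coefficient, the photon weight
    is multiplied by the single-scattering albedo at each interaction,
    and a toy momentum-transfer proxy (1 - product of successive
    direction cosines) is accumulated per path.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    transmitted_weight = 0.0
    momentum_transfer = []
    for _ in range(n_photons):
        z, cz, w, mt = 0.0, 1.0, 1.0, 0.0
        while w > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t
            z += cz * step
            if z >= thickness:            # escaped through the far side
                transmitted_weight += w
                momentum_transfer.append(mt)
                break
            if z < 0.0:                   # escaped back out the front
                break
            w *= albedo                   # absorption weighting
            new_cz = 1.0 - 2.0 * rng.random()   # isotropic redirection
            mt += 1.0 - cz * new_cz       # toy momentum-transfer proxy
            cz = new_cz
    return transmitted_weight / n_photons, momentum_transfer

# Transmittance drops and accumulated momentum transfer grows as the
# scattering coefficient increases, mirroring the contrast trends above.
t_sparse, mt_sparse = simulate_slab(mu_a=0.01, mu_s=1.0, thickness=1.0)
t_dense, mt_dense = simulate_slab(mu_a=0.01, mu_s=5.0, thickness=1.0)
```

In the full method, the distribution of accumulated momentum transfer per detected photon is what feeds the speckle contrast calculation; this toy version only exposes the bookkeeping.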
NASA Astrophysics Data System (ADS)
Bubnis, Gregory J.
Since their discovery 25 years ago, carbon fullerenes have been widely studied for their unique physicochemical properties and for applications including organic electronics and photovoltaics. For these applications it is highly desirable for crystalline fullerene thin films to spontaneously self-assemble on surfaces. Accordingly, many studies have functionalized fullerenes with the aim of tailoring their intermolecular interactions and controlling interactions with the solid substrate. The success of these rational design approaches hinges on the subtle interplay of intermolecular forces and molecule-substrate interactions. Molecular modeling is well-suited to studying these interactions by directly simulating self-assembly. In this work, we consider three different fullerene functionalization approaches and for each approach we carry out Monte Carlo simulations of the self-assembly process. In all cases, we use a "coarse-grained" molecular representation that preserves the dominant physical interactions between molecules and maximizes computational efficiency. The first approach we consider is the traditional gold-thiolate SAM (self-assembled monolayer) strategy which tethers molecules to a gold substrate via covalent sulfur-gold bonds. For this we study an asymmetric fullerene thiolate bridged by a phenyl group. Clusters of 40 molecules are simulated on the Au(111) substrate at different temperatures and surface coverage densities. Fullerenes and S atoms are found to compete for Au(111) surface sites, and this competition prevents self-assembly of highly ordered monolayers. Next, we investigate self-assembled monolayers formed by fullerenes with hydrogen-bonding carboxylic acid substituents. We consider five molecules with different dimensions and symmetries. Monte Carlo cooling simulations are used to find the most stable solid structures of clusters adsorbed to Au(111). The results show cases where fullerene-Au(111) attraction, fullerene close-packing, and
Bisaso, Kuteesa R; Mukonzo, Jackson K; Ette, Ene I
2015-11-01
The study was undertaken to develop a pharmacokinetic-pharmacodynamic model to characterize efavirenz-induced neuropsychologic impairment, given preexistent impairment, which can be used for the optimization of efavirenz therapy via Monte Carlo simulations. The modeling was performed with NONMEM 7.2. A 1-compartment pharmacokinetic model was fitted to efavirenz concentration data from 196 Ugandan patients treated with a 600-mg daily efavirenz dose. Pharmacokinetic parameters and area under the curve (AUC) were derived. Neuropsychologic evaluation of the patients was done at baseline and in week 2 of antiretroviral therapy. A discrete-time 2-state first-order Markov model was developed to describe neuropsychologic impairment. Efavirenz AUC, day 3 efavirenz trough concentration, and female sex increased the probability (P01) of neuropsychologic impairment. Efavirenz oral clearance (CL/F) increased the probability (P10) of resolution of preexistent neuropsychologic impairment. The predictive performance of the reduced (final) model, given the data, incorporating AUC on P01 and CL/F on P10, showed that the model adequately characterized the neuropsychologic impairment observed with efavirenz therapy. Simulations with the developed model predicted a 7% overall reduction in neuropsychologic impairment probability at 450 mg of efavirenz. We recommend a reduction in efavirenz dose from 600 to 450 mg, because the 450-mg dose has been shown to produce sustained antiretroviral efficacy.
A review of Monte Carlo simulations for the Bose-Hubbard model with diagonal disorder
NASA Astrophysics Data System (ADS)
Pollet, Lode
2013-10-01
We review the physics of the Bose-Hubbard model with disorder in the chemical potential, focusing on recently published analytical arguments in combination with quantum Monte Carlo simulations. Apart from the superfluid and Mott insulator phases that can occur in this system without disorder, disorder allows for an additional phase, called the Bose glass phase. The topology of the phase diagram is subject to strong theorems proving that the Bose glass phase must intervene between the superfluid and the Mott insulator and implying a Griffiths transition between the Mott insulator and the Bose glass. The full phase diagrams in 3d and 2d are discussed, and we zoom in on the insensitivity of the transition line between the superfluid and the Bose glass in the close vicinity of the tip of the Mott insulator lobe. We briefly comment on the established and remaining questions in the 1d case, and give a short overview of numerical work on related models.
Monte Carlo simulation of aorta autofluorescence
NASA Astrophysics Data System (ADS)
Kuznetsova, A. A.; Pushkareva, A. E.
2016-08-01
Results of numerical Monte Carlo simulation of aorta autofluorescence are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.
NASA Astrophysics Data System (ADS)
Matsumoto, T.
2007-09-01
Monte Carlo simulations are performed to evaluate depth-dose distributions for possible treatment of cancers by boron neutron capture therapy (BNCT). The ICRU computational model of ADAM & EVA was used as a phantom to simulate tumors at a depth of 5 cm in central regions of the lungs, liver and pancreas. Tumors of the prostate and osteosarcoma were also centered at the depth of 4.5 and 2.5 cm in the phantom models. The epithermal neutron beam from a research reactor was the primary neutron source for the MCNP calculation of the depth-dose distributions in those cancer models. For brain tumor irradiations, the whole-body dose was also evaluated. The MCNP simulations suggested that a lethal dose of 50 Gy to the tumors can be achieved without reaching the tolerance dose of 25 Gy to normal tissue. The whole-body phantom calculations also showed that the BNCT could be applied for brain tumors without significant damage to whole-body organs.
3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.
2014-07-01
After deep-space hibernation, ESA's Rosetta spacecraft was successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and start to observe the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling in order to interpret the resulting measurements and establish observational and data analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback with 3D DSMC is that it is computationally highly intensive and thus time consuming. However, the performance can be dramatically increased with parallel computing on Graphic Processor Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1 where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models which are based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC^{++} (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24
Experimental validation of a direct simulation by Monte Carlo molecular gas flow model
Shufflebotham, P.K.; Bartel, T.J.; Berney, B.
1995-07-01
The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements made in N{sub 2} at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered. The axisymmetric model showed localized low pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data. © 1995 American Vacuum Society.
Wysong, Ingrid; Gimelshein, Sergey; Bondar, Yevgeniy; Ivanov, Mikhail
2014-04-15
Validation of three direct simulation Monte Carlo chemistry models—total collision energy, Quantum Kinetic, and Kuznetsov state specific (KSS)—is conducted through the comparison of calculated vibrational temperatures of molecular oxygen with measured values inside a normal shock wave. First, the 2D geometry and numerical approach used to simulate the shock experiments is verified. Next, two different vibrational relaxation models are validated by comparison with data for the M = 9.3 case where dissociation is small in the nonequilibrium region of the shock and with newly obtained thermal rates. Finally, the three chemistry model results are compared for M = 9.3 and 13.4 in the region where the vibrational temperature is greatly different from the rotational and translational temperature, and thus nonequilibrium dissociation is important. It is shown that the peak vibrational temperature is very sensitive to the initial nonequilibrium rate of reaction in the chemistry model and that the vibrationally favored KSS model is much closer to the measured peak, but the post-peak behavior indicates that some details of the model still need improvement.
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon “packets” as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct “humped” morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with “virtual-electrode” regions of strongly de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach, along with highly anatomically-detailed models, to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
Modeling of vision loss due to vitreous hemorrhage by Monte Carlo simulation.
Al-Saeed, Tarek A; El-Zaiat, Sayed Y
2014-08-01
Vitreous hemorrhage is the leaking of blood into the vitreous humor which results from different diseases. Vitreous hemorrhage leads to vision problems ranging from mild to severe cases in which blindness occurs. Since erythrocytes are the major scatterers in blood, we are modeling light propagation in vitreous humor with erythrocytes randomly distributed in it. We consider the total medium (vitreous humor plus erythrocytes) as a turbid medium and apply Monte Carlo simulation. Then, we calculate the parameters characterizing vision loss due to vitreous hemorrhage. This work shows that the increase of the volume fraction of erythrocytes results in a decrease of the total transmittance of the vitreous body and an increase in the radius of maximum transmittance, the width of the circular strip of bright area, and the radius of the shadow area.
Modeling the tight focusing of beams in absorbing media with Monte Carlo simulations.
Brandes, Arnd R; Elmaklizi, Ahmed; Akarçay, H Günhan; Kienle, Alwin
2014-01-01
A severe drawback to the scalar Monte Carlo (MC) method is the difficulty of introducing diffraction when simulating light propagation. This hinders, for instance, the accurate modeling of beams focused through microscope objectives, where the diffraction patterns in the focal plane are of great importance in various applications. Here, we propose to overcome this issue by means of a direct extinction method. In the MC simulations, the photon paths' initial positions are sampled from probability distributions which are calculated with a modified angular spectrum of the plane waves technique. We restricted our study to the two-dimensional case, and investigated the feasibility of our approach for absorbing yet nonscattering materials. We simulated the focusing of collimated beams with uniform profiles through microscope objectives. Our results were compared with those yielded by independent simulations using the finite-difference time-domain method. Very good agreement was achieved between the results of both methods, not only for the power distributions around the focal region including diffraction patterns, but also for the distribution of the energy flow (Poynting vector). PMID:25393966
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-01
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies for the vibrational relaxation process are investigated; good agreement with the relaxation time given by the Landau-Teller expression is found for both methods, the established prohibiting-double-relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods for sampling energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and comparison to experimental measurements of a hypersonic carbon-dioxide flow around a flat-faced cylinder.
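The random-walk Metropolis idea mentioned above can be sketched for a single harmonic-oscillator mode (a minimal illustration, not the in-house DSMC implementation; the function name and one-mode restriction are assumptions). Quantum numbers v are sampled with Boltzmann weight exp(-v·Θvib/T) using nearest-neighbour random-walk proposals:

```python
import math
import random

def sample_sho_levels(theta_vib, T, n_samples=50000, seed=2):
    """Random-walk Metropolis sampling of simple-harmonic-oscillator
    vibrational quantum numbers v with weight exp(-v * theta_vib / T)."""
    rng = random.Random(seed)
    v = 0
    samples = []
    for _ in range(n_samples):
        v_new = v + rng.choice((-1, 1))      # symmetric random-walk proposal
        if v_new >= 0:                       # v < 0 has zero probability
            # Metropolis acceptance: ratio of Boltzmann factors
            if rng.random() < math.exp(-(v_new - v) * theta_vib / T):
                v = v_new
        samples.append(v)
    return samples
```

For theta_vib = T the chain's mean should approach the analytic value 1/(e - 1) ≈ 0.582; extending this walk to a vector of quantum numbers is what makes the multi-mode treatment efficient.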
Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim
2016-06-01
In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ɛ, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2 H6.
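The surrogate idea above, replacing an expensive Monte Carlo estimator over a parameter with a cheap fitted polynomial, can be sketched as follows. This is a toy illustration, not the paper's NVT-Gibbs workflow: `mc_estimate` is a stand-in "expensive" estimator, and an ordinary least-squares polynomial fit stands in for a true polynomial chaos expansion.

```python
import numpy as np

def mc_estimate(epsilon, n=200000, seed=0):
    """Stand-in for an expensive MC molecular simulation: estimates the
    mean Boltzmann factor of a Lennard-Jones pair energy at well depth
    `epsilon` (reduced units, sigma = 1)."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(0.9, 2.0, n)               # random pair separations
    u = 4.0 * epsilon * (r**-12 - r**-6)       # LJ pair energy
    return np.exp(-u).mean()                   # crude ensemble average

# Build the surrogate: evaluate the expensive model at a few nodes in
# epsilon, then fit a low-order polynomial that replaces it.
nodes = np.linspace(0.1, 1.0, 6)
values = np.array([mc_estimate(e) for e in nodes])
coeffs = np.polynomial.polynomial.polyfit(nodes, values, 3)

def surrogate(epsilon):
    """Cheap polynomial replacement for mc_estimate."""
    return np.polynomial.polynomial.polyval(epsilon, coeffs)
```

Once the nodes are evaluated, `surrogate` answers in microseconds, which is what makes large-scale parameter optimization against experimental data tractable.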
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match, and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation includes the exploration of non-iid effects, the varying importance of points within a match, and the simulation of an unlimited number of matches between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair, and mathematically sound platform for ranking players.
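The core building block of such a simulation, a player's probability of winning a single game given a fixed point-win probability p, can be sketched and checked against the standard closed form (a minimal iid illustration, not the dissertation's code; function names are assumptions):

```python
import random

def simulate_game(p, rng):
    """Play one tennis game; the server wins each point with probability p."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:   # game won (deuce handled implicitly)
            return True
        if b >= 4 and b - a >= 2:
            return False

def game_win_probability(p, n_games=100000, seed=7):
    """Monte Carlo estimate of the game-win probability."""
    rng = random.Random(seed)
    return sum(simulate_game(p, rng) for _ in range(n_games)) / n_games

def game_win_exact(p):
    """Closed-form game probability: wins to 0, 15, 30, plus deuce."""
    q = 1.0 - p
    deuce = 20.0 * p**3 * q**3 * p**2 / (p**2 + q**2)
    return p**4 * (1.0 + 4.0 * q + 10.0 * q**2) + deuce
```

Chaining the same idea over games, sets, and matches, and letting p vary by point importance, gives the non-iid experiments the abstract describes.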
Parsons, Neal; Levin, Deborah A.; van Duin, Adri C. T.; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(1Σg+)-N2(1Σg+) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections. PMID:25527935
a Test Particle Model for Monte Carlo Simulation of Plasma Transport Driven by Quasineutrality
NASA Astrophysics Data System (ADS)
Kuhl, Nelson M.
1995-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. We perform numerical experiments to validate a mathematical model of P. R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The simulations are made using a transport code written by O. Betancourt and M. Taylor, with changes to incorporate our case studies. We adopt a test particle model naturally suggested by the problem of tracking particles in plasma physics. The statistics due to collisions are modeled by a drift kinetic equation whose numerical solution is based on the Monte Carlo method of A. Boozer and G. Kuo-Petravic. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian. It is shown that details of the collision operator other than its dependence on the collision frequency and temperature matter little for transport, and the role of conservation of momentum is investigated. Exponential decay makes it possible to find the confinement times of both ions and electrons by high performance computing. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. We make a convergence study of the method, derive scaling laws that are in good agreement with predictions from experimental data, and present a comparison with the JET experiment.
ERIC Educational Resources Information Center
Curran, Patrick J.; Bollen, Kenneth A.; Paxton, Pamela; Kirby, James; Chen, Feinian
2002-01-01
Examined several hypotheses about the suitability of the noncentral chi-square in applied research using Monte Carlo simulation experiments with seven sample sizes and three distinct model types, each with five specifications. Results show that, in general, for models with small to moderate misspecification, the noncentral chi-square is well…
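The noncentral chi-square referred to above is straightforward to generate by Monte Carlo: it is the distribution of a sum of squared normals with nonzero means, with noncentrality equal to the sum of squared means. A minimal sketch (illustrative only; the choice of spreading the noncentrality evenly over the degrees of freedom is an assumption with no effect on the resulting distribution):

```python
import random

def noncentral_chisq(df, noncentrality, n_draws=50000, seed=3):
    """Draw noncentral chi-square variates as sums of squared shifted
    standard normals; E[X] = df + noncentrality."""
    rng = random.Random(seed)
    shift = (noncentrality / df) ** 0.5   # mean of each normal term
    draws = []
    for _ in range(n_draws):
        x = sum((rng.gauss(0.0, 1.0) + shift) ** 2 for _ in range(df))
        draws.append(x)
    return draws

draws = noncentral_chisq(df=5, noncentrality=2.0)
mean = sum(draws) / len(draws)   # theory: E[X] = 5 + 2 = 7
```

Comparing empirical quantiles of such draws against the fitted test statistic's distribution is the kind of check the simulation experiments above perform at scale.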
Monte Carlo computer simulations of Venus equilibrium and global resurfacing models
NASA Technical Reports Server (NTRS)
Dawson, D. D.; Strom, R. G.; Schaber, G. G.
1992-01-01
Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or greater than 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then, with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model cannot explain the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
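The equilibrium-resurfacing test above can be caricatured in a few lines: craters accumulate at random positions, and periodic resurfacing events erase the craters inside a random patch. This is an illustrative toy on a unit square, not the authors' planetary-surface simulation; all parameters and names are assumptions.

```python
import random

def resurfacing_run(n_steps=5000, resurface_every=50, patch_radius=0.05, seed=4):
    """Toy equilibrium-resurfacing run: one crater forms per step; every
    `resurface_every` steps a random circular patch is 'flooded',
    erasing the craters that fall inside it."""
    rng = random.Random(seed)
    craters = []
    for step in range(1, n_steps + 1):
        craters.append((rng.random(), rng.random()))     # new impact
        if step % resurface_every == 0:
            cx, cy = rng.random(), rng.random()          # flooded patch
            craters = [(x, y) for (x, y) in craters
                       if (x - cx) ** 2 + (y - cy) ** 2 > patch_radius ** 2]
    return craters
```

Statistics of the surviving crater population, such as its spatial randomness and the fraction of embayed craters, are what get compared against the observed Venusian record.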
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Douglass, Michael; Bezak, Eva; Penfold, Scott
2012-06-15
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB code was able to grow semi-realistic cell distributions (≈2 × 10⁸ cells in 1 cm³) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
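The cell-placement step above can be sketched with rejection sampling. Note the simplification: this toy uses spheres with a bounding-sphere overlap test, whereas the paper places rotated ellipsoids and detects overlap with an eigenvalue method; all names and parameters here are hypothetical.

```python
import random

def grow_cell_volume(n_cells=300, box=100.0, r_min=4.0, r_max=6.0,
                     max_tries=20000, seed=5):
    """Place randomized spherical 'cells' in a cubic box, rejecting any
    candidate that overlaps an already-accepted cell."""
    rng = random.Random(seed)
    cells = []                      # accepted cells as (x, y, z, r)
    tries = 0
    while len(cells) < n_cells and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        x, y, z = (rng.uniform(r, box - r) for _ in range(3))
        # accept only if the candidate clears every accepted cell
        if all((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 >= (r + cr) ** 2
               for cx, cy, cz, cr in cells):
            cells.append((x, y, z, r))
    return cells
```

The accepted coordinates and radii are exactly the kind of tumor coordinate system that would then be exported for GEANT4 to populate with sub-cellular regions.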
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-03-05
The concerns about the supply and resource of rare earth (RE) metals have generated a lot of interest in searching for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) for a wide range of temperature. The compositions of the main constituents in these two phases become higher when the temperature gets lower. Both FeCo-rich and NiAl-rich phases are in B2 ordering with Fe and Al on α-site and Ni and Co on β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is also observed in the NiAl-rich phase but the moment reduces as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
Monte Carlo simulation of domain growth in the kinetic Ising model on the connection machine
NASA Astrophysics Data System (ADS)
Amar, Jacques G.; Sullivan, Francis
1989-10-01
A fast multispin algorithm for the Monte Carlo simulation of the two-dimensional spin-exchange kinetic Ising model, previously described by Sullivan and Mountain and used by Amar et al., has been adapted for use on the Connection Machine and applied as a first test in a calculation of domain growth. Features of the code include: (a) the use of demon bits, (b) the simulation of several runs simultaneously to improve the efficiency of the code, (c) the use of virtual processors to simulate easily and efficiently a larger system size, (d) the use of the (NEWS) grid for fast communication between neighbouring processors and updating of boundary layers, (e) the implementation of an efficient random number generator much faster than that provided by Thinking Machines Corp., and (f) the use of the LISP function "funcall" to select which processors to update. Overall speed of the code when run on a (128x128) processor machine is about 130 million attempted spin-exchanges per second, about 9 times faster than the comparable code, using hardware vectorised-logic operations and 64-bit multispin coding on the Cyber 205. The same code can be used on a larger machine (65 536 processors) and should produce speeds in excess of 500 million attempted spin-exchanges per second.
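The elementary move of the spin-exchange (Kawasaki) kinetic Ising model above can be sketched serially; the Connection Machine code parallelizes many such attempts with multispin coding, which this single-move illustration deliberately omits. Function names and the J = 1 coupling are assumptions.

```python
import math
import random

def kawasaki_step(spins, L, beta, rng):
    """One attempted nearest-neighbour spin exchange (Kawasaki dynamics)
    on an L x L periodic Ising lattice; conserves total magnetization."""
    i, j = rng.randrange(L), rng.randrange(L)
    di, dj = rng.choice(((0, 1), (1, 0)))        # pick a neighbour bond
    i2, j2 = (i + di) % L, (j + dj) % L
    if spins[i][j] == spins[i2][j2]:
        return                                   # equal spins: no change

    def local_field(a, b, skip):
        # sum of the four neighbours, excluding the exchange partner
        return sum(spins[(a + da) % L][(b + db) % L]
                   for da, db in ((0, 1), (0, -1), (1, 0), (-1, 0))
                   if ((a + da) % L, (b + db) % L) != skip)

    # energy change of swapping the antiparallel pair (J = 1)
    dE = 2 * spins[i][j] * (local_field(i, j, (i2, j2))
                            - local_field(i2, j2, (i, j)))
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        spins[i][j], spins[i2][j2] = spins[i2][j2], spins[i][j]
```

Because moves only exchange spins, the order parameter is conserved, which is precisely why this dynamics exhibits the slow domain growth the study measures.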
Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun
2016-09-01
Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. Incident carbon-ion beams with energies between 220 MeV/u and 290 MeV/u were chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE of 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread-Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.
Modeling of composite latex particle morphology by off-lattice Monte Carlo simulation.
Duda, Yurko; Vázquez, Flavio
2005-02-01
Composite latex particles have shown a great range of applications such as paint resins, varnishes, waterborne adhesives, impact modifiers, etc. The high-performance properties of this kind of material may be explained in terms of a synergistic combination of two different polymers (usually a rubber and a thermoplastic). A great variety of composite latex particles with very different morphologies may be obtained by two-step emulsion polymerization processes. The formation of a specific particle morphology depends on the chemical and physical nature of the monomers used during the synthesis, the process temperature, the reaction initiator, the surfactants, etc. Only a few models have been proposed to explain the appearance of the composite particle morphologies. These models have been based on the change of the interfacial energies during the synthesis. In this work, we present a new three-component model: a polymer blend (flexible and rigid chain particles) is dispersed in water by forming spherical cavities. Monte Carlo simulations of the model in two dimensions are used to determine the density distribution of chains and water molecules inside the suspended particle. This approach allows us to study the dependence of the morphology of the composite latex particles on the relative hydrophilicity and flexibility of the chain molecules as well as on their density and composition. It has been shown that our simple model is capable of reproducing the main features of the various morphologies observed in synthesis experiments.
NASA Astrophysics Data System (ADS)
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry. This limits the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces that were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that in the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75–2.03 times of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
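The core navigation primitive for quadric bounding surfaces, the distance from a particle to the next surface crossing along its direction, reduces to solving a quadratic in the path length t. A CPU sketch (the GPU module itself is not shown; a symmetric coefficient matrix A and the function name are assumptions) for a surface xᵀAx + b·x + c = 0:

```python
import math

def ray_quadric_distance(p, d, A, b, c, eps=1e-12):
    """Smallest positive t with (p + t d) on the quadric surface
    x^T A x + b.x + c = 0 (A assumed symmetric), or None if no crossing."""
    Ad = [sum(A[i][k] * d[k] for k in range(3)) for i in range(3)]
    Ap = [sum(A[i][k] * p[k] for k in range(3)) for i in range(3)]
    qa = sum(d[i] * Ad[i] for i in range(3))
    qb = 2.0 * sum(p[i] * Ad[i] for i in range(3)) + sum(b[i] * d[i] for i in range(3))
    qc = sum(p[i] * Ap[i] for i in range(3)) + sum(b[i] * p[i] for i in range(3)) + c
    if abs(qa) < eps:                  # surface effectively planar along d
        if abs(qb) < eps:
            return None
        t = -qc / qb
        return t if t > eps else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None                    # ray misses the surface
    sq = math.sqrt(disc)
    roots = sorted(((-qb - sq) / (2.0 * qa), (-qb + sq) / (2.0 * qa)))
    for t in roots:
        if t > eps:
            return t
    return None
```

Taking the minimum such distance over a region's bounding surfaces, and comparing it with the sampled free-flight length, is the essence of particle navigation in parameterized geometry.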
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Liu, H. Helen
2001-02-01
In radiation therapy, new treatment modalities employing dynamic collimation and intensity modulation increase the complexity of dose calculation because a new dimension, time, has to be incorporated into the traditional three-dimensional problem. In this work, we investigated two classes of sampling technique to incorporate dynamic collimator motion in Monte Carlo simulation. The methods were initially evaluated for modelling enhanced dynamic wedges (EDWs) from Varian accelerators (Varian Medical Systems, Palo Alto, USA). In the position-probability-sampling or PPS method, a cumulative probability distribution function (CPDF) was computed for the collimator position, which could then be sampled during simulations. In the static-component-simulation or SCS method, a dynamic field is approximated by multiple static fields in a step-and-shoot fashion. The weights of the particles or the number of particles simulated for each component field are computed from the probability distribution function (PDF) of the collimator position. The CPDF and PDF were computed from the segmented treatment tables (STTs) for the EDWs. An output correction factor had to be applied in this calculation to account for the backscattered radiation affecting monitor chamber readings. Comparison of the phase-space data from the PPS method (with the step-and-shoot motion) with those from the SCS method showed excellent agreement. The accuracy of the PPS method was further verified from the agreement between the measured and calculated dose distributions. Compared to the SCS method, the PPS method is more automated and efficient from an operational point of view. The principle of the PPS method can be extended to simulate other dynamic motions, and in particular, intensity-modulated beams using multileaf collimators.
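The PPS sampling step above, drawing a collimator position from a CPDF built out of tabulated weights, can be sketched with inverse-CDF sampling (an illustrative stand-in: the positions and weights below are made up, not an actual segmented treatment table, and the function names are assumptions):

```python
import bisect
import random

def build_cpdf(positions, weights):
    """Turn tabulated (position, relative-weight) pairs, as one might read
    from a segmented treatment table, into a cumulative distribution."""
    total = float(sum(weights))
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    return positions, cum

def sample_position(positions, cum, rng):
    """Inverse-CDF sampling: return the first tabulated position whose
    cumulative probability exceeds a uniform random number."""
    u = rng.random()
    idx = bisect.bisect_left(cum, u)
    return positions[min(idx, len(positions) - 1)]
```

During the simulation each history simply calls `sample_position` to place the moving collimator, which is what makes the PPS approach more automated than enumerating static component fields.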
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which has limited their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
NASA Astrophysics Data System (ADS)
Nielsen, Jens; d'Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-01
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
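A minimal sketch of how lateral adsorbate interactions enter KMC rates, in the spirit of the framework described above but not Zacros itself: the energies, the 1D ring lattice, and the Metropolis-like rate form below are all illustrative assumptions.

```python
import math
import random

# Minimal rejection-free KMC on a 1D ring of sites, with a pairwise
# first-nearest-neighbor repulsion modifying the desorption barrier.
# All energies and prefactors are illustrative, not fitted values.
KB_T = 0.05          # eV, ~580 K
NU = 1.0e13          # attempt frequency, 1/s
EA0 = 0.8            # bare desorption barrier, eV
EPS = 0.1            # destabilization per occupied neighbor, eV

def desorption_rate(occ, i):
    n_neigh = occ[i - 1] + occ[(i + 1) % len(occ)]
    # Repulsive neighbors destabilize the adsorbate -> lower barrier.
    return NU * math.exp(-(EA0 - EPS * n_neigh) / KB_T)

def kmc_step(occ, rng):
    """Pick one desorption event with probability proportional to its
    rate; advance the clock by an exponential waiting time."""
    events = [i for i, o in enumerate(occ) if o]
    rates = [desorption_rate(occ, i) for i in events]
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in zip(events, rates):
        acc += k
        if acc >= r:
            occ[i] = 0
            break
    return -math.log(rng.random()) / total  # time increment

rng = random.Random(0)
occ = [1] * 10                  # fully covered ring
t = 0.0
while any(occ):
    t += kmc_step(occ, rng)
print(f"all desorbed after t = {t:.3e} s")
```

A cluster-expansion Hamiltonian generalizes the single pairwise term here to long-range and many-body contributions, which is exactly the extension the paper argues is needed for accurate rates.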
Saito, Ikuo; Kobayashi, Makoto; Matsushita, Yasuyuki; Mori, Asuka; Kawasugi, Kaname; Saruta, Takao
2008-07-01
The objective of the present study was to analyze the cost-effectiveness of lifetime antihypertensive therapy with angiotensin II receptor blocker (ARB) monotherapy, calcium channel blocker (CCB) monotherapy, or ARB plus CCB (ARB+CCB) combination therapy in Japan. Based on the results of large-scale clinical trials and epidemiological data, we constructed a Markov model for patients with essential hypertension. Our Markov model comprised coronary heart disease (CHD), stroke, and progression of diabetic nephropathy submodels. Based on this model, analysis of the prognosis of each patient was repeatedly conducted by Monte Carlo simulation. The three treatment strategies were compared in hypothetical 55-year-old patients with systolic blood pressure (SBP) of 160 mmHg in the absence and presence of comorbid diabetes. Olmesartan medoxomil 20 mg/d was the ARB and azelnidipine 16 mg/d the CCB in our model. On-treatment SBP was assumed to be 125, 140, and 140 mmHg in the ARB+CCB, ARB alone, and CCB alone groups, respectively. Costs and quality-adjusted life years (QALYs) were discounted by 3%/year. The ARB+CCB group was the most cost-effective both in male and female patients with or without diabetes. In conclusion, ARB plus CCB combination therapy may be a more cost-effective lifetime antihypertensive strategy than monotherapy with either agent alone. PMID:18957808
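The patient-level Monte Carlo over a Markov model can be sketched as below. All transition probabilities, utilities, and the SBP-risk relation are illustrative placeholders, not the study's calibrated inputs; only the 3%/year discounting matches the abstract.

```python
import random

# Toy 3-state Markov model (well -> event -> dead) run by Monte Carlo.
# All numbers are hypothetical placeholders for illustration.
DISCOUNT = 0.03
UTILITY = {"well": 0.90, "event": 0.65, "dead": 0.0}

def annual_event_prob(sbp):
    # Hypothetical: annual CHD/stroke risk rises with systolic BP.
    return 0.01 + 0.0004 * (sbp - 120)

def simulate_patient(sbp, years, rng):
    state, qalys = "well", 0.0
    for year in range(years):
        # Discounted quality-adjusted life-year accrual per cycle.
        qalys += UTILITY[state] / (1 + DISCOUNT) ** year
        if state == "well" and rng.random() < annual_event_prob(sbp):
            state = "event"
        elif state == "event" and rng.random() < 0.05:
            state = "dead"
        elif state == "dead":
            break
    return qalys

rng = random.Random(1)
# On-treatment SBP: 125 mmHg (combination) vs 140 mmHg (monotherapy).
mean_combo = sum(simulate_patient(125, 30, rng) for _ in range(2000)) / 2000
mean_mono = sum(simulate_patient(140, 30, rng) for _ in range(2000)) / 2000
print(mean_combo, mean_mono)
```

Repeating this for costs as well as QALYs, and per submodel (CHD, stroke, nephropathy), yields the incremental cost-effectiveness comparison the study reports.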
A background error covariance model of significant wave height employing Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Guo, Yanyou; Hou, Yijun; Zhang, Chunmei; Yang, Jie
2012-09-01
The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under an assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows a reasonable approximation of anisotropic and inhomogeneous errors.
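The Monte Carlo (ensemble) estimate of a BEC matrix can be sketched as follows: average the outer products of an ensemble of perturbations. The smooth synthetic perturbations below stand in for real forecast-error samples and are purely illustrative.

```python
import math
import random

# Monte Carlo estimate of a background error covariance (BEC) matrix
# for a 1D transect of significant wave height. Synthetic perturbations
# (a few random long-wavelength sine modes) replace real model errors.
N_GRID, N_ENS = 20, 500
rng = random.Random(7)

def perturbation():
    amps = [rng.gauss(0, 1) for _ in range(3)]
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(3)]
    return [sum(a * math.sin(2 * math.pi * (k + 1) * x / N_GRID + p)
                for k, (a, p) in enumerate(zip(amps, phases)))
            for x in range(N_GRID)]

ens = [perturbation() for _ in range(N_ENS)]
mean = [sum(e[i] for e in ens) / N_ENS for i in range(N_GRID)]

def cov(i, j):
    return sum((e[i] - mean[i]) * (e[j] - mean[j]) for e in ens) / (N_ENS - 1)

# Error correlation falls off with grid separation, as expected for a
# spatially coherent wave field.
corr_near = cov(0, 1) / math.sqrt(cov(0, 0) * cov(1, 1))
corr_far = cov(0, 10) / math.sqrt(cov(0, 0) * cov(10, 10))
print(corr_near, corr_far)
```

The paper's contribution is to make such statistics flow-dependent, tying the covariance structure to the mean wave direction instead of assuming it stationary and uniform.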
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking
2016-01-01
Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data
Monte Carlo simulation of x-ray scatter based on patient model from digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Liu, Bob; Wu, Tao; Moore, Richard H.; Kopans, Daniel B.
2006-03-01
We are developing a breast-specific scatter correction method for digital breast tomosynthesis (DBT). The 3D breast volume was initially reconstructed from 15 projection images acquired from a GE prototype tomosynthesis system without correction of scatter. The voxel values were mapped to tissue compositions using various segmentation schemes. This voxelized digital breast model was entered into a Monte Carlo package simulating the prototype tomosynthesis system. One billion photons were generated from the x-ray source for each projection in the simulation and images of scattered photons were obtained. A primary-only projection image was then produced by subtracting the scatter image from the corresponding original projection image, which contains contributions from both primary and scattered photons. The scatter-free projection images were then used to reconstruct the 3D breast volume using the same algorithm. Compared with the uncorrected 3D image, the x-ray attenuation coefficients represented by the scatter-corrected 3D image are closer to those derived from the measurement data.
Monte Carlo simulations for a Lotka-type model with reactant surface diffusion and interactions.
Zvejnieks, G; Kuzovkov, V N
2001-05-01
The standard Lotka-type model, which was introduced for the first time by Mai et al. [J. Phys. A 30, 4171 (1997)] for a simplified description of autocatalytic surface reactions, is generalized here for a case of mobile and energetically interacting reactants. The mathematical formalism is proposed for determining the dependence of transition rates on the interaction energy (and temperature) for the general mathematical model, and the Lotka-type model, in particular. By means of Monte Carlo computer simulations, we have studied the impact of diffusion (with and without energetic interactions between reactants) on oscillatory properties of the A+B-->2B reaction. The diffusion leads to a desynchronization of oscillations and a subsequent decrease of oscillation amplitude. The energetic interaction between reactants has a dual effect depending on the type of mobile reactants. In the limiting case of mobile reactants B the repulsion results in a decrease of amplitudes. However, these amplitudes increase if reactants A are mobile and repulse each other. A simplified interpretation of the obtained results is given.
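A well-mixed Gillespie simulation of the Lotka scheme gives the mean-field counterpart of the lattice model studied above; the lattice, diffusion, and energetic-interaction effects are deliberately omitted here, and the rate constants are illustrative.

```python
import math
import random

# Well-mixed Gillespie simulation of a Lotka scheme:
#   A -> 2A (replication), A + B -> 2B (autocatalysis), B -> 0 (removal).
# Rate constants are illustrative; the deterministic fixed point is
# a* = K3/K2 = 500, b* = K1/K2 = 500, so trajectories oscillate around it.
K1, K2, K3 = 1.0, 0.002, 1.0

def gillespie(a, b, t_end, rng):
    t, traj = 0.0, [(0.0, a, b)]
    while t < t_end and (a or b):
        rates = [K1 * a, K2 * a * b, K3 * b]
        total = sum(rates)
        if total == 0:
            break
        t += -math.log(rng.random()) / total   # exponential waiting time
        r = rng.random() * total               # pick a reaction channel
        if r < rates[0]:
            a += 1
        elif r < rates[0] + rates[1]:
            a -= 1; b += 1
        else:
            b -= 1
        traj.append((t, a, b))
    return traj

traj = gillespie(500, 500, 5.0, random.Random(3))
print(len(traj), traj[-1])
```

In the lattice version, reactant diffusion desynchronizes such local oscillations across the surface, which is the amplitude-damping mechanism the abstract describes.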
Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations was compared to experimental data for different radionuclides and collimators. Results of the RT technique match the experimental collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
Modeling of multi-band drift in nanowires using a full band Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Hathwar, Raghuraj; Saraniti, Marco; Goodnick, Stephen M.
2016-07-01
We report on a new numerical approach for multi-band drift within the context of full band Monte Carlo (FBMC) simulation and apply this to Si and InAs nanowires. The approach is based on the solution of the Krieger and Iafrate (KI) equations [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986)], which gives the probability of carriers undergoing interband transitions subject to an applied electric field. The KI equations are based on the solution of the time-dependent Schrödinger equation, and previous solutions of these equations have used Runge-Kutta (RK) methods to numerically solve the KI equations. This approach made the solution of the KI equations numerically expensive and was therefore only applied to a small part of the Brillouin zone (BZ). Here we discuss an alternate approach to the solution of the KI equations using the Magnus expansion (also known as "exponential perturbation theory"). This method is more accurate than the RK method as the solution lies on the exponential map and shares important qualitative properties with the exact solution such as the preservation of the unitary character of the time evolution operator. The solution of the KI equations is then incorporated through a modified FBMC free-flight drift routine and applied throughout the nanowire BZ. The importance of the multi-band drift model is then demonstrated for the case of Si and InAs nanowires by simulating a uniform field FBMC and analyzing the average carrier energies and carrier populations under high electric fields. Numerical simulations show that the average energy of the carriers under high electric field is significantly higher when multi-band drift is taken into consideration, due to the interband transitions allowing carriers to achieve higher energies.
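The advantage of an exponential (Magnus-type) propagator over an explicit scheme can be seen in a toy two-level system: the exponential map is exactly unitary, so the state norm is preserved, while forward Euler lets it drift (Runge-Kutta drifts too, though less severely). This is a sketch of the principle only, not the FBMC implementation; the Hamiltonian and step sizes are illustrative.

```python
import math

# Toy two-level system with H = V * sigma_x (hbar = 1). The lowest-order
# Magnus propagator U = exp(-i H dt) has the closed form
# cos(V dt) * I - i sin(V dt) * sigma_x, and is exactly unitary.
V, DT, STEPS = 1.0, 0.01, 1000

def magnus_step(psi):
    c, s = math.cos(V * DT), math.sin(V * DT)
    a, b = psi
    return (c * a - 1j * s * b, -1j * s * a + c * b)

def euler_step(psi):
    # Forward Euler: psi_{n+1} = psi_n - i H psi_n dt (non-unitary).
    a, b = psi
    return (a - 1j * V * b * DT, b - 1j * V * a * DT)

psi_m = psi_e = (1.0 + 0j, 0.0 + 0j)
for _ in range(STEPS):
    psi_m = magnus_step(psi_m)
    psi_e = euler_step(psi_e)

norm = lambda p: abs(p[0]) ** 2 + abs(p[1]) ** 2
print(norm(psi_m), norm(psi_e))
```

The Euler norm grows by a factor of roughly (1 + (V dt)^2)^(n/2), which for the parameters above is about 5% over 1000 steps, while the Magnus result stays on the unit sphere to machine precision.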
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. Stability failure risk ratio described jointly by probability and possibility has deficiency in characterization of influence of fuzzy factors and representation of the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied into stability failure risk analysis of gravity dam. Stability of gravity dam is viewed as a hybrid event considering both fuzziness and randomness of failure criterion, design parameters and measured data. Credibility distribution function is conducted as a novel way to represent uncertainty of influence factors of gravity dam stability. And combining with Monte Carlo simulation, corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach on risk calculation of both dam foundation and double sliding surfaces is provided. The results show that, the present method is feasible to be applied on analysis of stability failure risk for gravity dams. The risk assessment obtained can reflect influence of both sorts of uncertainty, and is suitable as an index value. PMID:27386264
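The hybrid fuzzy/random Monte Carlo idea can be sketched as follows. The stability criterion and every parameter value below are hypothetical; only the triangular credibility distribution and its inverse follow standard credibility theory.

```python
import random

# Hybrid uncertainty Monte Carlo: the friction coefficient is random
# (normal), while cohesion is a triangular fuzzy variable sampled via
# the inverse of its credibility distribution. Numbers are illustrative,
# not dam data.
F_MEAN, F_SD = 1.1, 0.1          # friction coefficient (random)
C_A, C_B, C_C = 0.6, 0.9, 1.2    # cohesion, triangular fuzzy (MPa)

def inv_credibility_triangular(u, a, b, c):
    """Inverse of the credibility distribution of a triangular fuzzy
    number (a, b, c): Cr{x} rises from 0 at a to 1/2 at b to 1 at c."""
    if u < 0.5:
        return a + 2.0 * u * (b - a)
    return 2.0 * b - c + 2.0 * u * (c - b)

def failure_risk(n, rng):
    fails = 0
    for _ in range(n):
        f = rng.gauss(F_MEAN, F_SD)
        coh = inv_credibility_triangular(rng.random(), C_A, C_B, C_C)
        # Hypothetical stability criterion: index below 1.0 means failure.
        safety = 0.5 * f + 0.6 * coh
        fails += safety < 1.0
    return fails / n

rng = random.Random(11)
risk = failure_risk(50000, rng)
print(f"estimated failure risk = {risk:.4f}")
```

The resulting risk index blends both sources of uncertainty, which is the property the article argues a purely probabilistic failure ratio lacks.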
Experimental verification of a Monte Carlo-based MLC simulation model for IMRT dose calculation
Tyagi, Neelam; Moran, Jean M.; Litzenberg, Dale W.; Bielajew, Alex F.; Fraass, Benedick A.; Chetty, Indrin J.
2007-02-15
Inter- and intra-leaf transmission and head scatter can play significant roles in intensity modulated radiation therapy (IMRT)-based treatment deliveries. In order to accurately calculate the dose in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to other components in the accelerator treatment head, be accurately modeled. In this paper, we have used the Monte Carlo method (MC) to develop a comprehensive model of the Varian 120 leaf MLC and have compared it against measurements in homogeneous phantom geometries under different IMRT delivery circumstances. We have developed a geometry module within the DPM MC code to simulate the detailed MLC design and the collimating jaws. Tests consisting of leakage, leaf positioning and static MLC shapes were performed to verify the accuracy of transport within the MLC model. The calculations show agreement within 2% in the high dose region for both film and ion-chamber measurements for these static shapes. Clinical IMRT treatment plans for the breast [both segmental MLC (SMLC) and dynamic MLC (DMLC)], prostate (SMLC) and head and neck split fields (SMLC) were also calculated and compared with film measurements. Such a range of cases was chosen to investigate the accuracy of the model as a function of modulation in the beamlet pattern, beamlet width, and field size. The overall agreement is within 2%/2 mm of the film data for all IMRT beams except the head and neck split field, which showed differences up to 5% in the high dose regions. Various sources of uncertainties in these comparisons are discussed.
NASA Astrophysics Data System (ADS)
Nourazar, S. S.; Jahangiri, P.; Aboutalebi, A.; Ganjaei, A. A.; Nourazar, M.; Khadem, J.
2011-06-01
The effect of new terms in the improved algorithm, the modified direct simulation Monte-Carlo (MDSMC) method, is investigated by simulating a rarefied binary gas mixture flow inside a rotating cylinder. Dalton law for the partial pressures contributed by each species of the binary gas mixture is incorporated into our simulation using the MDSMC method and the direct simulation Monte-Carlo (DSMC) method. Moreover, the effect of the exponent of the cosine of deflection angle (α) in the inter-molecular collision models, the variable soft sphere (VSS) and the variable hard sphere (VHS), is investigated in our simulation. The improvement of the results of simulation is pronounced using the MDSMC method when compared with the results of the DSMC method. The results of simulation using the VSS model show some improvements on the result of simulation for the mixture temperature at radial distances close to the cylinder wall where the temperature reaches the maximum value when compared with the results using the VHS model.
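For the deflection-angle exponent α mentioned above, the VSS model samples the post-collision angle as cos χ = 2r^(1/α) − 1, with α = 1 recovering the isotropic scattering of the VHS model; a quick numerical check:

```python
import random

# Sampling the post-collision deflection angle chi in DSMC collision
# models: VSS uses cos(chi) = 2 * r**(1/alpha) - 1 for uniform r in
# [0, 1); alpha = 1 reduces to the isotropic VHS case.
def sample_cos_chi(alpha, rng):
    return 2.0 * rng.random() ** (1.0 / alpha) - 1.0

rng = random.Random(6)
vhs = [sample_cos_chi(1.0, rng) for _ in range(100000)]   # isotropic
vss = [sample_cos_chi(1.5, rng) for _ in range(100000)]   # forward-peaked
print(sum(vhs) / len(vhs), sum(vss) / len(vss))
```

With α > 1 the distribution is forward-peaked (mean cos χ = 2α/(α+1) − 1 > 0), which is what makes the VSS model reproduce diffusion coefficients more faithfully than VHS.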
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
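The ANOVA-based correction can be illustrated with a toy patient-level model: the between-run variance of run means overestimates the input-uncertainty variance by (within-run variance)/n, which can be subtracted off. The "economic model" below is a placeholder, not the authors' formulae.

```python
import random

# Two-level Monte Carlo PSA sketch: N_RUNS outer draws of the model
# inputs, N_PATIENTS simulated patients per run. The toy patient-level
# model is illustrative only.
N_RUNS, N_PATIENTS = 200, 100
rng = random.Random(5)

def patient_cost(theta, rng):
    # Hypothetical patient-level model: cost varies around a
    # parameter-dependent mean.
    return rng.gauss(theta, 2.0)

run_means, within_vars = [], []
for _ in range(N_RUNS):
    theta = rng.gauss(10.0, 1.0)          # sampled model input
    costs = [patient_cost(theta, rng) for _ in range(N_PATIENTS)]
    m = sum(costs) / N_PATIENTS
    v = sum((c - m) ** 2 for c in costs) / (N_PATIENTS - 1)
    run_means.append(m)
    within_vars.append(v)

grand = sum(run_means) / N_RUNS
between = sum((m - grand) ** 2 for m in run_means) / (N_RUNS - 1)
mean_within = sum(within_vars) / N_RUNS
# ANOVA correction: variance attributable to input uncertainty alone.
param_var = between - mean_within / N_PATIENTS
print(grand, param_var)
```

Here the true input-uncertainty variance is 1.0 by construction, and the corrected estimate recovers it even though each run uses only 100 patients; this separation is what makes modest per-run patient counts viable for PSA.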
Dodds, Michael G; Vicini, Paolo
2004-09-01
Advances in computer hardware and the associated computer-intensive algorithms made feasible by these advances [like Markov chain Monte Carlo (MCMC) data analysis techniques] have made possible the application of hierarchical full Bayesian methods in analyzing pharmacokinetic and pharmacodynamic (PK-PD) data sets that are multivariate in nature. Pharmacokinetic data analysis in particular has been one area that has seized upon this technology to refine estimates of drug parameters from sparse data gathered in a large, highly variable population of patients. A drawback in this type of analysis is that it is difficult to quantitatively assess convergence of the Markov chains to a target distribution, and thus, it is sometimes difficult to assess the reliability of estimates gained from this procedure. Another complicating factor is that, although the application of MCMC methods to population PK-PD problems has been facilitated by new software designed for the PK-PD domain (specifically PKBUGS), experts in PK-PD may not have the necessary experience with MCMC methods to detect and understand problems with model convergence. The objective of this work is to provide an example of a set of diagnostics useful to investigators, by analyzing in detail three convergence criteria (namely the Raftery and Lewis, Geweke, and Heidelberger and Welch methods) on a simulated problem and with a rule of thumb of 10,000 chain elements in the Markov chain. We used two publicly available software packages to assess convergence of MCMC parameter estimates; the first performs Bayesian parameter estimation (PKBUGS/WinBUGS), and the second is focused on posterior analysis of estimates (BOA). The main message that seems to emerge is that accurately estimating confidence regions for the parameters of interest is more demanding than estimating the parameter means. Together, these tools provide numerical means by which an investigator can establish confidence in convergence and thus in the
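A Geweke-style mean-comparison diagnostic can be sketched as follows. The full diagnostic uses spectral-density estimates of the variance; plain sample variances are used here, which is only adequate for weakly autocorrelated chains.

```python
import math
import random

# Minimal Geweke-style convergence check: z-score comparing the mean of
# the first 10% of an MCMC chain against the mean of the last 50%.
def geweke_z(chain, first=0.1, last=0.5):
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    def mean_var(x):
        m = sum(x) / len(x)
        v = sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
        return m, v / len(x)    # mean and naive variance of the mean
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    return (ma - mb) / math.sqrt(va + vb)

# Stationary AR(1) "chain" started near its target: z should be modest.
rng = random.Random(9)
x, chain = 0.0, []
for _ in range(10000):
    x = 0.5 * x + rng.gauss(0, 1)
    chain.append(x)
print(geweke_z(chain))
```

A chain that is still drifting toward its target produces a large |z|, flagging non-convergence; this is the kind of numeric evidence the authors recommend inspecting (via BOA) alongside the Raftery-Lewis and Heidelberger-Welch criteria.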
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa·m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields which holds for higher throughputs is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.
Huang, Chen-Hsi; Marian, Jaime
2016-10-26
We derive an Ising Hamiltonian for kinetic simulations involving interstitial and vacancy defects in binary alloys. Our model, which we term 'ABVI', incorporates solute transport by both interstitial defects and vacancies into a mathematically-consistent framework, and thus represents a generalization to the widely-used ABV model for alloy evolution simulations. The Hamiltonian captures the three possible interstitial configurations in a binary alloy: A-A, A-B, and B-B, which makes it particularly useful for irradiation damage simulations. All the constants of the Hamiltonian are expressed in terms of bond energies that can be computed using first-principles calculations. We implement our ABVI model in kinetic Monte Carlo simulations and perform a verification exercise by comparing our results to published irradiation damage simulations in simple binary systems with Frenkel pair defect production and several microstructural scenarios, with matching agreement found. PMID:27541350
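A bond-counting Hamiltonian driving rejection-free vacancy-hop KMC can be sketched as follows. This is a 1D toy, not the ABVI model itself: the bond energies are illustrative placeholders (the paper's constants come from first-principles calculations), and a Metropolis-like rate replaces physical migration barriers.

```python
import math
import random

# Bond-counting (ABV-type) Hamiltonian on a 1D ring of A/B atoms with
# one vacancy V, plus a rejection-free vacancy-hop KMC step.
E_BOND = {("A", "A"): -0.30, ("A", "B"): -0.35, ("B", "B"): -0.25,
          ("A", "V"): 0.0, ("B", "V"): 0.0, ("V", "V"): 0.0}
KB_T = 0.05   # eV
NU = 1.0e13   # attempt frequency, 1/s

def bond(a, b):
    return E_BOND[tuple(sorted((a, b)))]

def energy(lat):
    return sum(bond(lat[i], lat[(i + 1) % len(lat)]) for i in range(len(lat)))

def vacancy_kmc_step(lat, rng):
    """Pick one vacancy-atom exchange with probability proportional to a
    Metropolis-like rate NU*exp(-max(dE, 0)/kT); return the time step."""
    v = lat.index("V")
    jumps = []
    for nb in ((v - 1) % len(lat), (v + 1) % len(lat)):
        trial = lat[:]
        trial[v], trial[nb] = trial[nb], trial[v]
        d_e = energy(trial) - energy(lat)
        jumps.append((nb, NU * math.exp(-max(d_e, 0.0) / KB_T)))
    total = sum(k for _, k in jumps)
    r = rng.random() * total
    acc = 0.0
    for nb, k in jumps:
        acc += k
        if acc >= r:
            lat[v], lat[nb] = lat[nb], lat[v]
            break
    return -math.log(rng.random()) / total

rng = random.Random(2)
lat = list("AABBABAB") + ["V"]
t = sum(vacancy_kmc_step(lat, rng) for _ in range(1000))
print("".join(lat), f"t = {t:.2e} s")
```

The ABVI generalization adds interstitial configurations (A-A, A-B, B-B dumbbells) as additional species with their own bond constants, so the same bond-counting machinery covers irradiation-produced Frenkel pairs.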
Zhdanov, Vladimir P
2002-03-01
Discussing the effect of adsorbate-adsorbate lateral interactions on the kinetics of heterogeneous catalytic reactions, Zvejnieks and Kuzovkov [Phys. Rev. E 63, 051104 (2001)] conclude that in the case of adsorbed particles the Metropolis Monte Carlo dynamics is meaningless and propose to use their own dynamics, which is equivalent to the Glauber dynamics. In this Comment, I show that these and other conclusions and prescriptions by Zvejnieks and Kuzovkov are not in line with the general principles of simulations of rate processes in adsorbed overlayers.
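For reference, the two dynamics at issue differ only in their acceptance rule; both satisfy detailed balance with respect to the same Boltzmann distribution, and the dispute concerns their physical realism for rate processes, not their mathematical validity. A minimal sketch of the two rules:

```python
import math

def metropolis(dE, beta):
    """Metropolis acceptance probability: min(1, exp(-beta*dE))."""
    return min(1.0, math.exp(-beta * dE))

def glauber(dE, beta):
    """Glauber (heat-bath) acceptance probability: 1/(1 + exp(beta*dE))."""
    return 1.0 / (1.0 + math.exp(beta * dE))

# Both rules satisfy detailed balance: p(dE)/p(-dE) == exp(-beta*dE).
beta, dE = 1.0, 0.5
for rule in (metropolis, glauber):
    ratio = rule(dE, beta) / rule(-dE, beta)
    print(rule.__name__, round(ratio, 6))
```

Both loop iterations print the same ratio, exp(-0.5), confirming that the two dynamics share the equilibrium distribution even though their transition rates differ.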
Monte Carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
D. L. Kelly
2007-06-01
Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
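The posterior predictive check described above can be illustrated with a deliberately simple stand-in for a WinBUGS model: a conjugate Gamma-Poisson model for failure counts with made-up data, where replicated data sets drawn from the posterior predictive distribution are compared against the observed total. The prior, data, and check statistic are all illustrative assumptions, not the author's model:

```python
import math
import random

random.seed(1)

# Hypothetical failure counts in successive, equal-length observation windows.
observed = [2, 3, 1, 4, 2]

# A conjugate Gamma prior on the Poisson failure rate gives a
# Gamma(0.5 + sum(observed), n_windows) posterior, which we sample directly
# instead of running a full MCMC chain as WinBUGS would.
shape = 0.5 + sum(observed)
rate = float(len(observed))

def poisson(lam):
    """Simple Poisson sampler (Knuth's method; adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def replicate():
    """One data set drawn from the posterior predictive distribution."""
    lam = random.gammavariate(shape, 1.0 / rate)  # posterior draw of the rate
    return [poisson(lam) for _ in observed]

# Posterior predictive check on the total count: an extreme p-value (near 0
# or 1) would indicate the model cannot replicate the observed data.
reps = [sum(replicate()) for _ in range(2000)]
p_value = sum(r >= sum(observed) for r in reps) / len(reps)
print(round(p_value, 2))
```

A p-value near 0.5 indicates the observed total is typical of data the fitted model generates, which is the summary-statistic idea of model validation the abstract describes.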
Schaefer, C.; Jansen, A. P. J.
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
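The mass-balance correction can be illustrated as a least-squares projection onto the mass-conserving subspace: given noisy net production rates r and a stoichiometry matrix S, the smallest correction satisfying S r' = 0 is r' = r - Sᵀ(SSᵀ)⁻¹Sr. This is a sketch under stated assumptions, not the authors' implementation; the stoichiometry matrix and rates are invented for a one-to-one A → B surface reaction:

```python
def correct_rates(S, r):
    """Smallest Euclidean correction to r so that S @ r' == 0 (pure stdlib)."""
    m, n = len(S), len(r)
    v = [sum(S[i][j] * r[j] for j in range(n)) for i in range(m)]  # residual S r
    G = [[sum(S[i][k] * S[j][k] for k in range(n)) for j in range(m)]
         for i in range(m)]                                        # Gram matrix S S^T
    lam = solve(G, v)                                              # G lam = v
    return [r[j] - sum(S[i][j] * lam[i] for i in range(m)) for j in range(n)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda q: abs(M[q][c]))
        M[c], M[p] = M[p], M[c]
        for q in range(c + 1, n):
            f = M[q][c] / M[c][c]
            for k in range(c, n + 1):
                M[q][k] -= f * M[c][k]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (M[c][n] - sum(M[c][k] * x[k] for k in range(c + 1, n))) / M[c][c]
    return x

# Unimolecular reaction A -> B: conservation requires r_A + r_B = 0,
# so the stoichiometry matrix is S = [[1, 1]].
r_noisy = [-1.02, 0.97]            # stochastic KMC estimates violate the balance
r_fixed = correct_rates([[1, 1]], r_noisy)
print([round(x, 3) for x in r_fixed])  # → [-0.995, 0.995]
```

The corrected rates sum to zero exactly, removing the unphysical mass accumulation the abstract warns about while distributing the adjustment equally across the species.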
McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan
2013-11-15
Purpose: The purpose of this study is to adapt an equivalent source model originally developed for conventional CT Monte Carlo dose quantification to the radiation oncology context and validate its application for evaluating concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator. Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors have adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes into account intrinsic filtration and an equivalent filter module characterizing the added bowtie filtration. Equivalent spectra were generated for 80, 100, and 125 kVp beams with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full and half bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT modes of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within body (32 cm in diameter) and head (16 cm in diameter) CTDI phantoms. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly underestimated
Buyukada, Musa
2016-09-01
Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization (PSO), and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was reached by the ANN61 multi-layer perceptron model, with an R² of 0.99994. A blend ratio of 90 to 10 (PH to coal, wt%), a temperature of 305 °C, and a heating rate of 49 °C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. The validation experiments resulted in yields of 87.5 ± 0.2% after three replications. Monte Carlo simulations were used for probabilistic assessment of the stochastic variability and uncertainty associated with explanatory variables of the co-combustion process. PMID:27243606
A self-adjusted Monte Carlo simulation as a model for financial markets with central regulation
NASA Astrophysics Data System (ADS)
Horváth, Denis; Gmitra, Martin; Kuscsik, Zoltán
2006-03-01
Properties of the self-adjusted Monte Carlo algorithm applied to the 2D Ising ferromagnet are studied numerically. An endogenous feedback form, expressed in terms of instant running averages, is suggested in order to generate a biased random walk of the temperature that converges to criticality without external tuning. The robustness of the stationary regime with respect to partial accessibility of the information is demonstrated. Several statistical and scaling aspects have been identified which allow us to establish an alternative spin lattice model of the financial market. It turns out that our model, like the model suggested by Bornholdt [Int. J. Mod. Phys. C 12 (2001) 667], may be described by a Lévy-type stationary distribution of feedback variations with unique exponent α ≃ 3.3. However, the differences reflected by the Hurst exponents suggest that the resemblances between the studied models seem to be non-trivial.
NASA Astrophysics Data System (ADS)
Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei
2014-09-01
Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, present lattice models for kinetic Monte Carlo simulations cannot treat the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor for maintaining a stable single 4H polytype during SiC growth.
Wang, Shuang; Zhao, Jianhua; Lui, Harvey; He, Qingli; Bai, Jintao; Zeng, Haishan
2014-09-01
Raman photon generation inside human skin and photon escape to the skin surface were modeled in an eight-layered skin optical model. Intrinsic Raman spectra of different skin layers were determined by microscopy measurements of excised skin tissue sections. Monte Carlo simulation was used to study the excitation light distribution and the intrinsic Raman signal distortion caused by tissue reabsorption and scattering during in vivo measurements. The simulation results demonstrated how different skin layers contribute to the observed in vivo Raman spectrum. Using the strongest Raman peak at 1445 cm⁻¹ as an example, the simulation suggested that the integrated contribution of the stratum corneum layer is 1.3%, the epidermis layer 28%, the dermis layer 70%, and the subcutaneous fat layer 1.1%. Reasonably good matching between the calculated spectrum and the measured in vivo Raman spectra was achieved, demonstrating the utility of our modeling method for understanding clinical measurements.
T. Oda; M. Shimada; K. Zhang; P. Calderoni; Y. Oya; M. Sokolov; R. Kolasinski
2011-11-01
The behavior of hydrogen isotopes implanted into tungsten containing vacancies was simulated using a Monte Carlo technique. The correlations between the distribution of implanted deuterium and the fluence, trap density, and trap distribution were evaluated. Throughout the present study, qualitatively understandable results were obtained. In order to improve the precision of the model and obtain quantitatively reliable results, it is necessary to address the following subjects: (1) how to balance long-time irradiation processes with a rapid diffusion process, (2) how to prevent unrealistic accumulation of hydrogen, and (3) how to model the release of hydrogen forcibly loaded into a region where hydrogen already exists at high density.
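The trapping-versus-release competition can be caricatured with a toy Monte Carlo walk. This is not the authors' model: the 1D depth grid, trap placement, and boundary rules below are all invented for illustration, where implanted walkers diffuse until they either reach the surface (released) or land on a trap site (retained):

```python
import random

random.seed(3)

# Toy sketch: implanted atoms random-walk on a 1D depth grid; a fraction of
# sites are traps that immobilize a walker on arrival; walkers reaching the
# surface (index 0) are released.
DEPTH, N_TRAPS, N_ATOMS, MAX_STEPS = 200, 20, 500, 10_000
traps = set(random.sample(range(10, DEPTH), N_TRAPS))

def fate(start):
    """Return 'trapped' or 'released' for one implanted walker."""
    x = start
    for _ in range(MAX_STEPS):
        x += random.choice((-1, 1))
        if x <= 0:
            return "released"          # escaped through the surface
        if x >= DEPTH:                 # reflect at the far boundary
            x = DEPTH - 1
        if x in traps:
            return "trapped"
    return "trapped"                   # still diffusing; count as retained

results = [fate(5) for _ in range(N_ATOMS)]
retained = results.count("trapped") / N_ATOMS
print(round(retained, 2))
```

Even this caricature exhibits the abstract's central tension: the retained fraction depends jointly on implantation depth, trap density, and how long the diffusion is allowed to run.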
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies, and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions have been added to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
Models for direct Monte Carlo simulation of coupled vibration-dissociation
NASA Technical Reports Server (NTRS)
Haas, Brian L.; Boyd, Iain D.
1993-01-01
A new model for reactive collisions is developed within the framework of a particle method, which simulates coupled vibration-dissociation (CVD) behavior in high-temperature gases. The fundamental principles of particle simulation methods are introduced with particular attention given to the probability functions employed to select thermal and reactive collisions. Reaction probability functions are derived which favor vibrationally excited molecules as reaction candidates. The new models derived here are used to simulate CVD behavior during thermochemical relaxation of constant-volume O2 reservoirs, as well as the dissociation incubation behavior of postshock N2 flows for comparisons with previous models and experimental data.
Guérin, Bastein; Fakhri, Georges El
2008-01-01
We have developed and validated a realistic simulation of random coincidences, pixelated block detectors, light sharing among crystal elements and dead-time in 2D and 3D positron emission tomography (PET) imaging based on the SimSET Monte Carlo simulation software. Our simulation was validated by comparison to a Monte Carlo transport code widely used for PET modeling, GATE, and to measurements made on a PET scanner. METHODS: We have modified the SimSET software to allow independent tracking of single photons in the object and septa while taking advantage of existing voxel based attenuation and activity distributions and validated importance sampling techniques implemented in SimSET. For each single photon interacting in the detector, the energy-weighted average of interaction points was computed, a blurring model applied to account for light sharing and the associated crystal identified. Detector dead-time was modeled in every block as a function of the local single rate using a variance reduction technique. Electronic dead-time was modeled for the whole scanner as a function of the prompt coincidences rate. Energy spectra predicted by our simulation were compared to GATE. NEMA NU-2 2001 performance tests were simulated with the new simulation as well as with SimSET and compared to measurements made on a Discovery ST (DST) camera. RESULTS: Errors in simulated spatial resolution (full width at half maximum, FWHM) were 5.5% (6.1%) in 2D (3D) with the new simulation, compared with 42.5% (38.2%) with SimSET. Simulated (measured) scatter fractions were 17.8% (21.3%) in 2D and 45.8% (45.2%) in 3D. Simulated and measured sensitivities agreed within 2.3 % in 2D and 3D for all planes and simulated and acquired count rate curves (including NEC) were within 12.7% in 2D in the [0: 80 kBq/cc] range and in 3D in the [0: 35 kBq/cc] range. The new simulation yielded significantly more realistic singles' and coincidences' spectra, spatial resolution, global sensitivity and lesion
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With over a million cells, over a hundred million tallies, and over ten billion particle histories, simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the memory required by the model exceeds the capacity of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully performed the simulation of the full-core pin-by-pin model using domain decomposition and nested parallel computation. The k_eff and the flux of each cell are obtained. (authors)
Almarza, N G; Pȩkalski, J; Ciach, A
2014-04-28
The triangular lattice model with nearest-neighbor attraction and third-neighbor repulsion, introduced by Pȩkalski, Ciach, and Almarza [J. Chem. Phys. 140, 114701 (2014)], is studied by Monte Carlo simulation. Introduction of appropriate order parameters allowed us to construct a phase diagram in which different phases with patterns made of clusters, bubbles, or stripes are thermodynamically stable. We observe, in particular, two distinct lamellar phases: the less ordered one with global orientational order and the more ordered one with both orientational and translational order. Our results concern spontaneous pattern formation on solid surfaces, fluid interfaces, or membranes that is driven by competing interactions between adsorbing particles or molecules.
Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment.
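The CSAF construction itself is simple to sketch: simulate inter-individual variability in the relevant kinetic steps, then divide an upper percentile of the resulting population distribution by the median. The distributions below are invented lognormals standing in for enzyme-activity variability, not the paper's fitted kinetic constants:

```python
import random

random.seed(7)

# Illustrative only: lognormal inter-individual variability in two
# hypothetical enzyme activities whose product determines relative
# metabolite formation for one simulated individual.
def simulate_formation():
    bioactivation = random.lognormvariate(0.0, 0.4)  # e.g. P450-mediated step
    sulfation = random.lognormvariate(0.0, 0.5)      # e.g. SULT-mediated step
    return bioactivation * sulfation

population = sorted(simulate_formation() for _ in range(100_000))

def percentile(data, q):
    """Empirical percentile of an already-sorted sample."""
    return data[min(len(data) - 1, int(q / 100 * len(data)))]

# CSAF = upper percentile / median of the population distribution.
csaf_90 = percentile(population, 90) / percentile(population, 50)
csaf_99 = percentile(population, 99) / percentile(population, 50)
print(round(csaf_90, 2), round(csaf_99, 2))
```

With these made-up variances the 90th-percentile CSAF lands near 2.3 and the 99th-percentile near 4.4, illustrating the abstract's point that covering a larger share of the population demands a larger adjustment factor.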
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
Conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean values of errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows the potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Monte Carlo simulations of fluid vesicles.
Sreeja, K K; Ipsen, John H; Sunil Kumar, P B
2015-07-15
Lipid vesicles are closed two dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induce anisotropic directional curvatures. Methods to explore the steady state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations. PMID:26087479
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground-state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3-carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
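The idea is easy to demonstrate: because the population is fully specified, the sampling distribution of any statistic, however intractable analytically, can be estimated by brute-force resampling. A minimal example comparing the efficiency of the sample mean and sample median:

```python
import random
import statistics

random.seed(0)

# A Monte Carlo study: the population is fully known (standard normal), so
# the sampling distribution of any statistic can be estimated by repeated
# random sampling, even when no closed form is available.
def sampling_distribution(statistic, n, reps=5000):
    return [statistic([random.gauss(0, 1) for _ in range(n)])
            for _ in range(reps)]

# Compare the variability of the sample mean and sample median for n = 25
# draws from N(0, 1); theory predicts the mean is the more efficient estimator.
means = sampling_distribution(statistics.mean, 25)
medians = sampling_distribution(statistics.median, 25)
print(statistics.stdev(means) < statistics.stdev(medians))  # → True
```

The empirical standard deviation of the mean comes out near the theoretical 1/√25 = 0.2, while the median's is larger, exactly the kind of result such a Monte Carlo study is designed to reveal.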
Turner, Adam C; Zhang, Di; Kim, Hyun J; DeMarco, John J; Cagnon, Chris H; Angel, Erin; Cody, Dianna D; Stevens, Donna M; Primak, Andrew N; McCollough, Cynthia H; McNitt-Gray, Michael F
2009-06-01
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types result in
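The core of the equivalent-spectrum step, computing the half-value layer (HVL) of a candidate spectrum so it can be tuned to match measurement, can be sketched as follows. The spectrum weights and aluminum attenuation coefficients below are illustrative placeholders, not tabulated data:

```python
import math

# Candidate spectrum (keV -> relative weight) and linear attenuation
# coefficients of aluminum (1/mm); both are made-up illustrative values.
spectrum = {40: 0.2, 60: 0.5, 80: 0.3}
mu_al = {40: 0.15, 60: 0.075, 80: 0.055}

def transmission(t_mm):
    """Spectrum-weighted transmission through t_mm of aluminum."""
    total = sum(spectrum.values())
    return sum(w * math.exp(-mu_al[e] * t_mm)
               for e, w in spectrum.items()) / total

def hvl(lo=0.0, hi=100.0, tol=1e-8):
    """Bisect for the aluminum thickness that halves the transmitted beam."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmission(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(hvl(), 2), "mm Al")
```

In the equivalent source method, a routine like this sits inside an outer loop that adjusts the candidate spectrum until the computed HVL1 (and, for the two-HVL variant, HVL2) agrees with the measured values.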
Al-Subeihi, Ala A A; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; van Bladeren, Peter J; Rietjens, Ivonne M C M; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1'-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1'-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1'-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1'-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1'-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1'-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment.
Hennessy, Ricky; Lim, Sam L; Markey, Mia K; Tunnell, James W
2013-03-01
We present a Monte Carlo lookup table (MCLUT)-based inverse model for extracting optical properties from tissue-simulating phantoms. This model is valid for close source-detector separation and highly absorbing tissues. The MCLUT is based entirely on Monte Carlo simulation, which was implemented using a graphics processing unit. We used tissue-simulating phantoms to determine the accuracy of the MCLUT inverse model. Our results show strong agreement between extracted and expected optical properties, with error rates of 1.74% for extracted reduced scattering values, 0.74% for extracted absorption values, and 2.42% for extracted hemoglobin concentration values. PMID:23455965
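A lookup-table inversion of this kind can be sketched in a few lines. Below, a closed-form toy reflectance function stands in for the GPU Monte Carlo forward simulations that would populate a real MCLUT; the inverse step is a brute-force nearest-entry search over a (mu_a, mu_s') grid for the entry that best matches reflectance measured at several close source-detector separations. All numbers are illustrative assumptions:

```python
import numpy as np

def forward_reflectance(mu_a, mu_s, d):
    # Hypothetical diffuse-reflectance model standing in for MC-generated entries
    return (mu_s / (mu_a + mu_s)) * np.exp(-mu_a * d)

mu_a_grid = np.linspace(0.1, 5.0, 200)   # absorption grid (1/cm)
mu_s_grid = np.linspace(5.0, 50.0, 200)  # reduced scattering grid (1/cm)
A, S = np.meshgrid(mu_a_grid, mu_s_grid, indexing="ij")
dists = np.array([0.02, 0.05, 0.1])      # close source-detector separations (cm)

# The "MCLUT": reflectance for every grid point at every separation
lut = np.stack([forward_reflectance(A, S, d) for d in dists], axis=-1)

def invert(measured):
    # Nearest-entry inversion: grid point minimizing the squared residual
    err = np.sum((lut - measured) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return mu_a_grid[i], mu_s_grid[j]

mu_a_true, mu_s_true = 2.0, 20.0
meas = forward_reflectance(mu_a_true, mu_s_true, dists)
est = invert(meas)
```

In practice the table entries come from Monte Carlo, and interpolation between grid points sharpens the estimate beyond the grid spacing.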
Monte Carlo simulations of two-dimensional Hubbard models with string bond tensor-network states
NASA Astrophysics Data System (ADS)
Song, Jeong-Pil; Wee, Daehyun; Clay, R. T.
2015-03-01
We study charge- and spin-ordered states in the two-dimensional extended Hubbard model on a triangular lattice at 1/3 filling. While the nearest-neighbor Coulomb repulsion V induces charge-ordered states, the competition between on-site U and nearest-neighbor V interactions leads to quantum phase transitions to an antiferromagnetic spin-ordered phase with honeycomb charge order. In order to avoid the fermion sign problem and handle frustration, we use quantum Monte Carlo methods with the string-bond tensor network ansatz for fermionic systems in two dimensions. We determine the phase boundaries of the several spin- and charge-ordered states and show a phase diagram in the on-site U and nearest-neighbor V plane. The numerical accuracy of the method is compared with exact diagonalization results in terms of the size of the matrices D. We also test the use of lattice symmetries to improve the string-bond ansatz. Work at Mississippi State University was supported by the US Department of Energy grant DE-FG02-06ER46315.
Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source
NASA Astrophysics Data System (ADS)
McArthur, Matthew S.; Rees, Lawrence B.; Czirr, J. Bart
2016-08-01
Using the combination of a neutron-sensitive 6Li glass scintillator detector with a neutron-insensitive 7Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on 6Li. We used this detector with a 252Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.
Morton, April M; McManamay, Ryan A; Nagle, Nicholas N; Piburn, Jesse O; Stewart, Robert N; Surendran Nair, Sujithkumar
2016-01-01
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification Acknowledgment This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
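The uncertainty-aware estimation strategy can be illustrated with a minimal sketch: represent each input (here a household count and a per-household electricity intensity, both with invented error models) as a distribution, draw jointly, and report percentiles rather than a single point estimate:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 20_000

# Hypothetical inputs for one census block, each with an assumed error model
households = rng.normal(120, 10, n_draws)                        # count with survey error
kwh_per_household = rng.lognormal(np.log(900), 0.25, n_draws)    # monthly kWh intensity

# Propagate input uncertainty through the (here trivial) consumption model
consumption = households * kwh_per_household                     # block-level monthly kWh
lo, med, hi = np.percentile(consumption, [5, 50, 95])            # interval, not a point
```

Reporting the 5th-95th percentile interval alongside the median is what lets the output reflect the input-data uncertainty the abstract emphasizes.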
NASA Astrophysics Data System (ADS)
Dunne, Lawrence J.; Furgani, Akrem; Jalili, Sayed; Manos, George
2009-05-01
Adsorption isotherms have been computed by Monte-Carlo simulation for methane/carbon dioxide and ethane/carbon dioxide mixtures adsorbed in the zeolite silicalite. These isotherms show remarkable differences with the ethane/carbon dioxide mixtures displaying strong adsorption preference reversal at high coverage. To explain the differences in the Monte-Carlo mixture isotherms an exact matrix calculation of the statistical mechanics of a lattice model of mixture adsorption in zeolites has been made. The lattice model reproduces the essential features of the Monte-Carlo isotherms, enabling us to understand the differing adsorption behaviour of methane/carbon dioxide and ethane/carbon dioxide mixtures in zeolites.
Monte Carlo simulations of a scintillation camera using GATE: validation and application modelling.
Staelens, Steven; Strul, Daniel; Santin, Giovanni; Vandenberghe, Stefaan; Koole, Michel; D'Asseler, Yves; Lemahieu, Ignace; Van de Walle, Rik
2003-09-21
Geant4 application for tomographic emission (GATE) is a recently developed simulation platform based on Geant4, specifically designed for PET and SPECT studies. In this paper we present validation results of GATE based on the comparison of simulations against experimental data, acquired with a standard SPECT camera. The most important components of the scintillation camera were modelled. The photoelectric effect, Compton scatter and Rayleigh scatter are included in the gamma transport process. Special attention was paid to the processes involved in the collimator: scatter, penetration and lead fluorescence. A LEHR and a MEGP collimator were modelled as closely as possible to their shape and dimensions. In the validation study, we compared the simulated and measured energy spectra of different isotopes: 99mTc, 22Na, 57Co and 67Ga. The sensitivity was evaluated by using sources at varying distances from the detector surface. Scatter component analysis was performed in different energy windows at different distances from the detector and for different attenuation geometries. Spatial resolution was evaluated using a 99mTc source at various distances. Overall results showed very good agreement between the acquisitions and the simulations. The clinical usefulness of GATE depends on its ability to use voxelized datasets. Therefore, a clinical extension was written so that digital patient data can be read in by the simulator as a source distribution or as an attenuating geometry. Following this validation we modelled two additional camera designs: the Beacon transmission device for attenuation correction and the Solstice scanner prototype with a rotating collimator. For the first setup a scatter analysis was performed, and for the latter design the simulated sensitivity results were compared against theoretical predictions. Both case studies demonstrated the flexibility and accuracy of GATE and exemplified its potential benefits in protocol optimization and in system design.
NASA Astrophysics Data System (ADS)
Rapini, M.; Dias, R. A.; Costa, B. V.
2007-01-01
Ultrathin magnetic films can be modeled as an anisotropic Heisenberg model with long-range dipolar interactions. It is believed that the phase diagram presents three phases: An ordered ferromagnetic phase (I), a phase characterized by a change from out-of-plane to in-plane in the magnetization (II), and a high-temperature paramagnetic phase (III). It is claimed that the border lines from phase I to III and II to III are of second order and from I to II is first order. In the present work we have performed a very careful Monte Carlo simulation of the model. Our results strongly support that the line separating phases II and III is of the BKT type.
Makeev, Alexei G; Kurkina, Elena S; Kevrekidis, Ioannis G
2012-06-01
Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed persist sometimes for hundreds of rotations, but they are ultimately unstable and break up (because of fluctuations and interactions between neighboring fronts), giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. It is interesting that travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, due to the non-local reaction kinetics.
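Dropping the lattice (and with it the spatial pulses and spirals), the underlying stochastic two-species kinetics can be sketched with a standard well-mixed Gillespie algorithm. This is only a simplified stand-in for the paper's lattice KMC, with invented rate constants:

```python
import random

# Well-mixed Gillespie sketch of stochastic Lotka-Volterra kinetics:
# prey birth X -> 2X (rate a*X), predation X+Y -> 2Y (rate b*X*Y),
# predator death Y -> 0 (rate c*Y). Rates are assumed, not the paper's.
def gillespie(x, y, a, b, c, t_end, rng):
    t = 0.0
    while t < t_end and x > 0 and y > 0:
        rates = [a * x, b * x * y, c * y]
        total = sum(rates)
        t += rng.expovariate(total)       # exponential waiting time to next event
        u = rng.random() * total          # choose event proportionally to its rate
        if u < rates[0]:
            x += 1                        # prey birth
        elif u < rates[0] + rates[1]:
            x -= 1; y += 1                # predation converts prey to predator
        else:
            y -= 1                        # predator death
    return x, y

rng = random.Random(2)
x, y = gillespie(100, 100, a=1.0, b=0.005, c=1.0, t_end=5.0, rng=rng)
```

The lattice version replaces the well-mixed rates with per-site events between neighboring cells, which is what produces the excitable-medium behavior described above.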
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
- Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project
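The core of an integrated cost-schedule Monte Carlo is repeated sampling of activity durations followed by a critical-path evaluation. The toy network below (three invented activities with triangular duration distributions) shows how a P-80 value emerges and why it exceeds the deterministic CPM date:

```python
import random

# Toy network: A precedes B and C, which run in parallel to the finish.
# Triangular (low, mode, high) durations in days; values are assumed.
acts = {"A": (8, 10, 14), "B": (15, 20, 30), "C": (18, 22, 26)}

def simulate_finish():
    # random.triangular takes (low, high, mode)
    d = {k: random.triangular(lo, hi, mode) for k, (lo, mode, hi) in acts.items()}
    return d["A"] + max(d["B"], d["C"])   # critical path runs through B or C

random.seed(1)
finishes = sorted(simulate_finish() for _ in range(10_000))
p80 = finishes[int(0.8 * len(finishes))]  # P-80 completion date for contingency
deterministic = 10 + max(20, 22)          # CPM date using the most-likely durations
```

Because the finish is the maximum over parallel paths, the simulated distribution shifts right of the deterministic date ("merge bias"), which is exactly why contingency set at a chosen confidence level exceeds the CPM estimate.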
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely alpha_j + gamma_j = alpha_{j-1} + gamma_{j+1}. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
Chen, Dongsheng; Zeng, Nan; Wang, Yunfei; He, Honghui; Tuchin, Valery V; Ma, Hui
2016-08-01
We conducted Monte Carlo simulations based on anisotropic sclera-mimicking models to examine the polarization features in Mueller matrix polar decomposition (MMPD) parameters during the refractive index matching process, which is one of the major mechanisms of optical clearing. In a preliminary attempt, by changing the parameters of the models, wavelengths, and detection geometries, we demonstrate how the depolarization coefficient and retardance vary during the refractive index matching process and explain the polarization features using the average value and standard deviation of scattering numbers of the detected photons. We also study the depth-resolved polarization features during the gradual progression of the refractive index matching process. The results above indicate that the refractive index matching process increases the depth of polarization measurements and may lead to higher contrast between tissues of different anisotropies in deeper layers. MMPD-derived polarization parameters can characterize the refractive index matching process qualitatively. PMID:27240298
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S
2015-12-01
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determination of dose distribution for uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled by using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into the polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and a 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
Giura, Stefano; Schoen, Martin
2014-08-01
We consider the phase behavior of a simple model of a liquid crystal by means of modified mean-field density-functional theory (MMF DFT) and Monte Carlo simulations in the grand canonical ensemble (GCEMC). The pairwise additive interactions between liquid-crystal molecules are modeled via a Lennard-Jones potential in which the attractive contribution depends on the orientation of the molecules. We derive the form of this orientation dependence through an expansion in terms of rotational invariants. Our MMF DFT predicts two topologically different phase diagrams. At weak to intermediate coupling of the orientation dependent attraction, there is a discontinuous isotropic-nematic liquid-liquid phase transition in addition to the gas-isotropic liquid one. In the limit of strong coupling, the gas-isotropic liquid critical point is suppressed in favor of a fluid- (gas- or isotropic-) nematic phase transition which is always discontinuous. By considering three representative isotherms in parallel GCEMC simulations, we confirm the general topology of the phase diagram predicted by MMF DFT at intermediate coupling strength. From the combined MMF DFT-GCEMC approach, we conclude that the isotropic-nematic phase transition is very weakly first order, thus confirming earlier computer simulation results for the same model [see M. Greschek and M. Schoen, Phys. Rev. E 83, 011704 (2011)].
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.
Densmore, J. D.; Urbatsch, T. J.; Evans, T. M.; Buksas, M. W.
2005-01-01
Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.
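The continuous-in-time character of this approach can be sketched for a 1-D chain of identical diffusive cells: each hop between cells is sampled from discrete leakage rates, and the particle's clock advances by an exponentially distributed waiting time, so its time is always known when it exits to a transport region. The rates and geometry below are illustrative, not the paper's actual discretization:

```python
import random

# Continuous-in-time DDMC sketch: in a diffusive cell the next hop is drawn
# from discrete leakage rates, and time advances by an exponential increment,
# so no time ambiguity arises when the particle hands off to standard MC.
def ddmc_step(cell, t, D, dx, rng):
    rate_l = D / dx**2            # assumed leakage rate to the left neighbor
    rate_r = D / dx**2            # assumed leakage rate to the right neighbor
    total = rate_l + rate_r
    t += rng.expovariate(total)   # exponential waiting time until the hop
    cell += -1 if rng.random() < rate_l / total else 1
    return cell, t

rng = random.Random(3)
cell, t = 0, 0.0
for _ in range(1000):             # random walk of a single DDMC particle
    cell, t = ddmc_step(cell, t, D=1.0, dx=0.1, rng=rng)
```

A full implementation adds absorption, frequency treatment, and the asymptotic diffusion-limit boundary condition at interfaces with optically thin regions.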
Development of Monte Carlo Capability for Orion Parachute Simulations
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
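The dispersion step described above (perturbing a nominal input set according to analyst-defined rules) can be sketched generically. The parameter names and distributions below are invented for illustration, not those of DSS, DSSA, or DTVSim:

```python
import random

# A dispersion rule maps each simulation input to a distribution and its
# parameters, mirroring (loosely) how an analyst might perturb a nominal set.
nominal = {"mass_kg": 8000.0, "drag_coeff": 0.8, "deploy_alt_m": 3000.0}
rules = {
    "mass_kg":      lambda r, v: r.gauss(v, 50.0),            # assumed sigma
    "drag_coeff":   lambda r, v: r.uniform(0.9 * v, 1.1 * v), # assumed +/-10%
    "deploy_alt_m": lambda r, v: r.gauss(v, 100.0),           # assumed sigma
}

def disperse(rng):
    # One dispersed input set for a single core-simulation run
    return {k: rules[k](rng, v) for k, v in nominal.items()}

rng = random.Random(7)
cases = [disperse(rng) for _ in range(500)]   # inputs for repeated sim runs
```

Each dispersed case would then be fed to the core simulation, with outputs stored for slicing and statistical review.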
McCreddin, A; Alam, M S; McNabola, A
2015-01-01
An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies.
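The microenvironment sampling technique that performed best in this comparison can be outlined briefly: draw a concentration for each microenvironment from its statistical distribution and form the time-weighted 24-h average. The distributions and occupancy hours below are illustrative assumptions, not the Dublin study's fitted values:

```python
import random

# Microenvironment concentration distributions (ug/m3) and daily hours;
# the log-normal parameters and hours are invented for illustration.
micro = {
    "home":    (lambda r: r.lognormvariate(3.0, 0.5), 14),
    "office":  (lambda r: r.lognormvariate(2.7, 0.4), 8),
    "commute": (lambda r: r.lognormvariate(3.5, 0.6), 2),
}

def daily_exposure(rng):
    # Time-weighted mean of randomly drawn microenvironment concentrations
    total_h = sum(h for _, h in micro.values())
    return sum(draw(rng) * h for draw, h in micro.values()) / total_h

rng = random.Random(11)
sims = [daily_exposure(rng) for _ in range(5000)]   # simulated 24-h exposures
mean_24h = sum(sims) / len(sims)
```

Repeating the draw yields a personal-exposure distribution without requiring further monitoring data, which is the utility the study highlights.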
NASA Astrophysics Data System (ADS)
Sinha, Indrajit; Mukherjee, Ashim K.
2014-03-01
The oxidation of CO on Pt-group metal surfaces has long attracted widespread attention due to its interesting oscillatory kinetics and spatiotemporal behavior. The use of STM in conjunction with other experimental data has confirmed the validity of the surface reconstruction (SR) model under low pressure and the more recent surface oxide (SO) model, which is possible under sub-atmospheric pressure conditions [1]. In the SR model the surface is periodically reconstructed below a certain low critical CO coverage and this reconstruction is lifted above a second, higher critical CO coverage. Alternatively, the SO model proposes periodic switching between a low-reactivity metallic surface and a high-reactivity oxide surface. Here we present an overview of our recent kinetic Monte Carlo (KMC) simulation studies on the oscillatory kinetics of surface-catalyzed CO oxidation. Different modifications of the lattice-gas Ziff-Gulari-Barshad (ZGB) model have been utilized or proposed for this purpose. First we present the effect of desorption on the ZGB reactive-to-poisoned irreversible phase transition in the SR model. Next we discuss our recent research on KMC simulation of the SO model. The ZGB framework is utilized to propose a new model incorporating not only the standard Langmuir-Hinshelwood (LH) mechanism, but also introducing the Mars-van Krevelen (MvK) mechanism for the surface oxide phase [5]. Phase diagrams, which are plots of long-time averages of various oscillating quantities against the normalized CO pressure, show two or three transitions depending on the CO coverage critical threshold (CT) value beyond which all adsorbed oxygen atoms are converted to surface oxide.
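A minimal ZGB-style lattice simulation conveys the framework these models build on. The sketch below implements only the basic Langmuir-Hinshelwood steps (CO adsorption with probability y_co, dissociative O2 adsorption onto a pair of adjacent empty sites, instantaneous reaction of adjacent CO-O pairs) with none of the desorption, SR, or MvK extensions discussed above:

```python
import random

# Bare-bones ZGB-style lattice KMC on a periodic square lattice.
L_SIZE = 20
EMPTY, CO, O = 0, 1, 2

def neighbors(i, j):
    return [((i + 1) % L_SIZE, j), ((i - 1) % L_SIZE, j),
            (i, (j + 1) % L_SIZE), (i, (j - 1) % L_SIZE)]

def react(lat, i, j, partner, rng):
    # Adjacent CO-O pair reacts: CO2 desorbs, freeing both sites
    if lat[i][j] == EMPTY:
        return
    mates = [n for n in neighbors(i, j) if lat[n[0]][n[1]] == partner]
    if mates:
        k, m = rng.choice(mates)
        lat[i][j], lat[k][m] = EMPTY, EMPTY

def step(lat, y_co, rng):
    i, j = rng.randrange(L_SIZE), rng.randrange(L_SIZE)
    if lat[i][j] != EMPTY:
        return
    if rng.random() < y_co:                   # CO adsorption attempt
        lat[i][j] = CO
        react(lat, i, j, O, rng)
    else:                                     # dissociative O2 adsorption
        empties = [n for n in neighbors(i, j) if lat[n[0]][n[1]] == EMPTY]
        if empties:
            k, m = rng.choice(empties)
            lat[i][j], lat[k][m] = O, O
            react(lat, i, j, CO, rng)
            react(lat, k, m, CO, rng)

rng = random.Random(5)
lat = [[EMPTY] * L_SIZE for _ in range(L_SIZE)]
for _ in range(50_000):
    step(lat, 0.5, rng)
cov_co = sum(row.count(CO) for row in lat) / L_SIZE**2   # CO coverage
```

Sweeping y_co and recording long-time coverage averages is how phase diagrams like those described above are produced; the SR and SO extensions add desorption and an oxide phase on top of these rules.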
The Monte Carlo Simulation of a Model Microactuator Driven by Rarefied Gas Thermal Effects
NASA Astrophysics Data System (ADS)
He, Ying; Stefanov, S. K.; Ota, Masahiro
2008-12-01
A computational model of a rotating microactuator with a four-blade rotor system immersed in a low-density gas is analyzed using the direct simulation Monte Carlo (DSMC) method. The rotor system is rotated by a driving force arising from rarefied-gas dynamic effects around each blade, due to the temperature difference between the front and rear blade sides created and held by a given source of local gas heating. Three microactuator designs are considered: a rotor system with blades inclined at a certain angle, with different temperatures on the front and rear surfaces of each blade; a rotor system with inclined blades immersed in a low-density gas under a temperature gradient held between a hot and a cold plate; and, finally, a rotor system with blades having a fancy surface with anisotropic reflection of the gas molecules. The DSMC analysis is performed at Knudsen numbers (based on the blade size) in the range Kn = 0.1-0.2, close to the experimentally established values for which the rotation rate reaches its maximum. The results obtained from the simulations convincingly demonstrate that the DSMC approach can be used for a qualitative and quantitative analysis of gas microrotor systems of different design.
Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K
2015-06-15
Purpose: A predicted-PET-image approach based on analytical filtering for proton range verification has been successfully developed and validated using FLUKA Monte Carlo (MC) codes and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification with the GATE/GEANT4 Monte Carlo simulation codes. Methods: We performed two experiments to validate the β+-isotope predictions of the analytical model against GATE/GEANT4 simulations. The first experiment evaluates the accuracy of the predicted β+-yields as a function of irradiated proton energy. In the second experiment, we simulate homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The β+-yield distributions filtered by the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off ranges. Results: We compared the filtered and MC-simulated β+-yield distributions under different conditions. First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range difference between the filtered and MC-simulated β+-yields in the distal fall-off region is within 1.5 mm for all materials used. The findings validate the usefulness of the analytical filtering model for range verification of proton therapy with GATE Monte Carlo simulations. In addition, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields using the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the fact that large discrepancies between the MC-simulated and predicted β+-yield distributions were observed, the study proves the effectiveness of the analytical filtering model for proton range verification using
Baran, Timothy M.; Foster, Thomas H.
2011-01-01
We present a new Monte Carlo model of cylindrical diffusing fibers that is implemented with a graphics processing unit. Unlike previously published models that approximate the diffuser as a linear array of point sources, this model is based on the construction of these fibers. This allows for accurate determination of fluence distributions and modeling of fluorescence generation and collection. We demonstrate that our model generates fluence profiles similar to a linear array of point sources, but reveals axially heterogeneous fluorescence detection. With axially homogeneous excitation fluence, approximately 90% of detected fluorescence is collected by the proximal third of the diffuser for μs'/μa = 8 in the tissue and 70 to 88% is collected in this region for μs'/μa = 80. Increased fluorescence detection by the distal end of the diffuser relative to the center section is also demonstrated. Validation of these results was performed by creating phantoms consisting of layered fluorescent regions. Diffusers were inserted into these layered phantoms and fluorescence spectra were collected. Fits to these spectra show quantitative agreement between simulated fluorescence collection sensitivities and experimental results. These results will be applicable to the use of diffusers as detectors for dosimetry in interstitial photodynamic therapy. PMID:21895311
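For context, the linear-array-of-point-sources approximation that this model improves upon can be written down directly from diffusion theory: sum the standard infinite-medium point-source fluence over sources spaced along the diffuser axis. The diffuser length, source count, and optical properties below are illustrative choices (e.g. μa = 0.1 cm⁻¹ with μs' = 8 cm⁻¹ gives the μs'/μa = 80 case mentioned above), not the paper's GPU model.

```python
import math

def point_source_fluence(r_cm, mu_a, mu_s_prime):
    """Diffusion-approximation fluence (per unit source power) at distance r
    from an isotropic point source in an infinite homogeneous medium."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))        # diffusion coefficient [cm]
    mu_eff = math.sqrt(mu_a / D)                 # effective attenuation [1/cm]
    return math.exp(-mu_eff * r_cm) / (4.0 * math.pi * D * r_cm)

def diffuser_fluence(x, y, length_cm=2.0, n_sources=21,
                     mu_a=0.1, mu_s_prime=8.0):
    """Fluence at (x, y) from a cylindrical diffuser modelled as a linear
    array of equal-weight point sources along the x axis (the approximation
    that the constructed-fiber Monte Carlo model replaces)."""
    xs = [i * length_cm / (n_sources - 1) for i in range(n_sources)]
    w = 1.0 / n_sources
    total = 0.0
    for sx in xs:
        r = math.hypot(x - sx, y)
        total += w * point_source_fluence(r, mu_a, mu_s_prime)
    return total
```

The point of the abstract is that while this approximation captures the fluence profile well, it cannot reproduce the axially heterogeneous fluorescence *detection* that the construction-based Monte Carlo model reveals.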
Litaize, O.; Serot, O.
2010-11-15
A Monte Carlo simulation of the fission fragment deexcitation process was developed in order to analyze and predict postfission-related nuclear data which are of crucial importance for basic and applied nuclear physics. The basic ideas of such a simulation were already developed in the past. In the present work, a refined model is proposed in order to provide a reliable description of the distributions related to fission fragments as well as to prompt neutron and {gamma} energies and multiplicities. This refined model is mainly based on a mass-dependent temperature ratio law used for the initial excitation energy partition of the fission fragments and a spin-dependent excitation energy limit for neutron emission. These phenomenological improvements allow us to reproduce, in good agreement, the {sup 252}Cf(sf) experimental data on the prompt fission neutron multiplicities {nu}(A) and {nu}(TKE), the neutron multiplicity distribution P({nu}), the neutron energy spectra N(E), and, lastly, the energy release in fission.
NASA Astrophysics Data System (ADS)
Li, Jun; Calo, Victor M.
2013-09-01
We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
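The core of such simulations is Metropolis sampling with a truncated L-J pair energy. A minimal canonical (NVT, single-box) sketch in reduced units is given below; it is far simpler than the paper's Gibbs-ensemble calculations (no volume or particle-swap moves), with an O(N²) energy sum and illustrative state-point parameters.

```python
import math
import random

def lj_energy(pos, L, eps=1.0, sigma=1.0, rcut=2.5):
    """Total truncated Lennard-Jones energy with the minimum-image convention."""
    E, rc2 = 0.0, rcut * rcut
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            dx = [pos[i][k] - pos[j][k] for k in range(3)]
            dx = [d - L * round(d / L) for d in dx]   # minimum image
            r2 = sum(d * d for d in dx)
            if r2 < rc2:
                sr6 = (sigma * sigma / r2) ** 3
                E += 4.0 * eps * (sr6 * sr6 - sr6)
    return E

def metropolis_nvt(n_side=3, rho=0.5, T=2.0, sweeps=100, seed=0):
    """NVT Metropolis sampling of a single-site L-J fluid in reduced units:
    particles start on a cubic lattice and are displaced one at a time.
    Returns (energy per particle, acceptance rate)."""
    rng = random.Random(seed)
    n = n_side ** 3
    L = (n / rho) ** (1.0 / 3.0)
    a = L / n_side
    pos = [[a * i, a * j, a * k] for i in range(n_side)
           for j in range(n_side) for k in range(n_side)]
    E = lj_energy(pos, L)
    beta, dmax, accepted = 1.0 / T, 0.2, 0
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        old = list(pos[i])
        pos[i] = [(c + rng.uniform(-dmax, dmax)) % L for c in old]
        Enew = lj_energy(pos, L)            # naive full recompute, O(N^2)
        if Enew <= E or rng.random() < math.exp(-beta * (Enew - E)):
            E, accepted = Enew, accepted + 1
        else:
            pos[i] = old                    # reject: restore old position
    return E / n, accepted / (sweeps * n)
```

A production code would recompute only the moved particle's pair energies and add the ensemble moves; this sketch only shows the acceptance rule the paper's sampling rests on.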
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and exploit the warranty's marketing potential, the manufacturer needs to master techniques for warranty cost prediction based on the reliability characteristics of the product. In this paper, a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. The reliability parameters obtained in this way are then used in a Monte Carlo simulation to predict the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower costs and increase profit.
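The Monte Carlo step of such an analysis is straightforward to sketch: sample times to failure and price each failure under the combined policy (full replacement before the free-replacement limit, a linearly declining rebate up to the warranty end). The Weibull time-to-failure model, the single-failure simplification, and all parameter values below are illustrative assumptions, not the paper's neural-network-derived reliability parameters.

```python
import random

def warranty_cost_per_unit(shape, scale, w_free, w_total, unit_cost,
                           n_sims=20000, seed=42):
    """Monte Carlo estimate of expected warranty cost per unit sold under a
    combined free-replacement / pro-rata policy. Failures before w_free are
    replaced free of charge; between w_free and w_total the manufacturer's
    share declines linearly to zero. Time to first failure is
    Weibull(shape, scale); only the first failure is costed in this sketch."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        t = rng.weibullvariate(scale, shape)   # hours to first failure
        if t < w_free:
            total += unit_cost                  # free replacement
        elif t < w_total:
            frac = (w_total - t) / (w_total - w_free)
            total += unit_cost * frac           # pro-rata rebate
    return total / n_sims
```

Evaluating this estimate for several candidate (w_free, w_total) pairs is exactly the comparison that lets a manufacturer pick the policy with acceptable expected cost.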
Zhou, X. W.; Yang, N. Y. C.
2014-03-14
Electronic properties of semiconductor devices are sensitive to defects such as second-phase precipitates, grain sizes, and voids. These defects can evolve over time, especially in oxidizing environments, and it is therefore important to understand the resulting aging behavior for reliable device applications. In this paper, we propose a kinetic Monte Carlo framework capable of simultaneously simulating the evolution of second-phase precipitates, grain sizes, and voids in complicated systems involving many species, including oxygen. This kinetic Monte Carlo model calculates the energy barriers of various events directly from experimental data. As a first step of our model implementation, we incorporate the second-phase formation module into the parallel kinetic Monte Carlo code SPPARKS. Selected aging simulations are performed to examine the formation of second-phase precipitates at the electroplated Au/Bi{sub 2}Te{sub 3} interface in oxygen and oxygen-free environments, and the results are compared with the corresponding experiments.
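The event-rate bookkeeping of such a KMC model can be sketched generically: convert barriers to rates with an Arrhenius form, then pick events with the standard residence-time (BKL) algorithm. The barrier values and event names below are hypothetical placeholders, not the experimentally derived barriers of the paper.

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def arrhenius_rate(barrier_ev, T, prefactor=1e13):
    """Event rate from an energy barrier (Arrhenius form, attempt
    frequency ~1e13 /s as a typical assumption)."""
    return prefactor * math.exp(-barrier_ev / (K_B * T))

def kmc_select(events, T, rng):
    """One step of the residence-time (BKL) kinetic Monte Carlo algorithm:
    pick an event with probability proportional to its rate, then advance
    the clock by an exponentially distributed waiting time.
    `events` maps event names to barriers in eV."""
    rates = {name: arrhenius_rate(b, T) for name, b in events.items()}
    total = sum(rates.values())
    r = rng.random() * total
    acc = 0.0
    chosen = None
    for name, rate in rates.items():
        acc += rate
        if r <= acc:
            chosen = name
            break
    dt = -math.log(1.0 - rng.random()) / total   # time increment [s]
    return chosen, dt
```

Because rates are exponential in the barrier, the lowest-barrier event dominates the dynamics at a given temperature, which is why barrier values taken directly from experiment control the simulated aging pathway.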
A Monte Carlo pencil beam scanning model for proton treatment plan simulation using GATE/GEANT4.
Grevillot, L; Bertrand, D; Dessy, F; Freud, N; Sarrut, D
2011-08-21
This work proposes a generic method for modeling scanned ion beam delivery systems, without simulation of the treatment nozzle and based exclusively on beam data library (BDL) measurements required for treatment planning systems (TPS). To this aim, new tools dedicated to treatment plan simulation were implemented in the Gate Monte Carlo platform. The method was applied to a dedicated nozzle from IBA for proton pencil beam scanning delivery. Optical and energy parameters of the system were modeled using a set of proton depth-dose profiles and spot sizes measured at 27 therapeutic energies. For further validation of the beam model, specific 2D and 3D plans were produced and then measured with appropriate dosimetric tools. Dose contributions from secondary particles produced by nuclear interactions were also investigated using field size factor experiments. Pristine Bragg peaks were reproduced with 0.7 mm range and 0.2 mm spot size accuracy. A 32 cm range spread-out Bragg peak with 10 cm modulation was reproduced with 0.8 mm range accuracy and a maximum point-to-point dose difference of less than 2%. A 2D test pattern consisting of a combination of homogeneous and high-gradient dose regions passed a 2%/2 mm gamma index comparison for 97% of the points. In conclusion, the generic modeling method proposed for scanned ion beam delivery systems was applicable to an IBA proton therapy system. The key advantage of the method is that it only requires BDL measurements of the system. The validation tests performed so far demonstrated that the beam model achieves clinical performance, paving the way for further studies toward TPS benchmarking. The method involves new sources that are available in the new Gate release V6.1 and could be further applied to other particle therapy systems delivering protons or other types of ions like carbon.
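The 2%/2 mm gamma-index criterion used for the 2D test pattern above can be illustrated in 1D. This is a generic global-normalisation gamma computation, not the exact implementation used in the paper.

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dta_mm=2.0, dd_percent=2.0):
    """1D gamma index with global normalisation: for each evaluated point,
    minimise the combined distance-to-agreement / dose-difference metric
    over all reference points. Positions in mm, doses in arbitrary units."""
    d_max = max(ref_dose)                       # global normalisation dose
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        g2 = min(((xe - xr) / dta_mm) ** 2 +
                 (100.0 * (de - dr) / (d_max * dd_percent)) ** 2
                 for xr, dr in zip(ref_pos, ref_dose))
        gammas.append(math.sqrt(g2))
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the usual acceptance criterion)."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)
```

A point passes when a reference point exists whose dose and position are jointly within the 2%/2 mm tolerance ellipse; the 97% figure quoted above is the pass rate of this criterion over the 2D pattern.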
Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R
2014-01-01
In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. The AAPM Report No. 105 addresses issues concerning the clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a multitude of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100 MeV–250 MeV interval. PMID:18670050
NASA Astrophysics Data System (ADS)
Antanasijević, Davor; Pocajt, Viktor; Perić-Grujić, Aleksandra; Ristić, Mirjana
2014-11-01
This paper describes the training, validation, testing and uncertainty analysis of general regression neural network (GRNN) models for the forecasting of dissolved oxygen (DO) in the Danube River. The main objectives of this work were to determine the optimum data normalization and input selection techniques, to determine the relative importance of uncertainty in different input variables, and to perform an uncertainty analysis of the model results using the Monte Carlo Simulation (MCS) technique. Min-max, median, z-score, sigmoid and tanh were validated as normalization techniques, whilst the variance inflation factor, correlation analysis and a genetic algorithm were tested as input selection techniques. As inputs, the GRNN models used 19 water quality variables, measured in the river water each month at 17 different sites over a period of 9 years. The best results were obtained using min-max normalized data and input selection based on the correlation between DO and the candidate variables, which provided the most accurate GRNN model with the smallest number of inputs: Temperature, pH, HCO3-, SO42-, NO3-N, Hardness, Na, Cl-, Conductivity and Alkalinity. The results show that the correlation coefficient between measured and predicted DO values is 0.85. The inputs with the greatest effect on the GRNN model (arranged in descending order) were T, pH, HCO3-, SO42- and NO3-N. Of all inputs, the variability of temperature had the greatest influence on the variability of DO content in the river body, with the DO decreasing at a rate similar to the theoretical DO decrease rate relating to temperature. The uncertainty analysis of the model results demonstrates that the GRNN can effectively forecast the DO content, since the distribution of model results is very similar to the corresponding distribution of real data.
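Two of the preprocessing steps named above, min-max normalization and correlation-based input selection, are simple to state in code. The correlation threshold and any data passed in are illustrative; the paper's actual selection used the full 19-variable, 9-year data set.

```python
import math

def min_max_normalize(values):
    """Scale a list of values linearly onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def select_inputs(candidates, target, threshold=0.4):
    """Keep candidate variables whose |r| with the target exceeds the
    threshold (a simple correlation-based input screen; the threshold
    value here is illustrative)."""
    return [name for name, series in candidates.items()
            if abs(pearson_r(series, target)) >= threshold]
```

On a toy example, a temperature series that tracks DO inversely is retained while an uncorrelated series is screened out, mirroring the ranking (T, pH, HCO3-, ...) reported above.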
NASA Astrophysics Data System (ADS)
Koda, Jun; Shapiro, P. R.
2007-12-01
Self-interacting dark matter (SIDM) has been proposed to solve the cuspy core problem of dark matter halos in standard CDM. There are two ways to investigate the effect of the 2-body, non-gravitational, elastic collisions of SIDM: Monte Carlo N-body simulation and a conducting fluid model. The former is a gravitational N-body simulation with a Monte Carlo algorithm for the SIDM scattering that changes the direction of N-body particles randomly according to a given scattering cross section. The latter is a system of fluid conservation equations with a thermal conduction term that describes the collisional effect, which was originally invented to describe the gravothermal collapse of globular clusters. Our previous work found a significant disagreement in the strength of collisionality required to solve the cuspy core problem. However, the two methods have not been properly tested against each other. Here, we make direct comparisons between Monte Carlo N-body simulations and analytic and numerical solutions of the conducting fluid (gaseous) model, for various isolated self-interacting dark matter halos. The N-body simulations reproduce the analytical self-similar solution of gravothermal collapse in the fluid model when one free parameter, the coefficient of heat conduction C, is chosen to be 0.75. The gravothermal collapse results of the simulations agree well (within 20%) with our 1D numerical hydrodynamic solutions of the fluid model for other initial conditions, including the Plummer model, the Hernquist profile and the NFW profile. In conclusion, the conducting fluid model is in reasonably good agreement with the Monte Carlo simulations for isolated halos. We will pursue the origin of the reported disagreement between the two methods in a cosmological environment by comparing new N-body simulations with fully cosmological initial conditions.
NASA Astrophysics Data System (ADS)
Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios
2010-12-01
We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
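The structure of such coupled SDE systems can be illustrated with a toy Euler-Maruyama integration. The linear chain below is a stand-in with illustrative coefficients, not the authors' BCF equations; it keeps the two ingredients named above, linearized neighbour interactions and conservative white noise (noise increments that sum to zero, so the mean terrace width is preserved).

```python
import math
import random

def simulate_terrace_widths(n=50, K=1.0, noise=0.1, dt=0.01,
                            steps=5000, seed=0):
    """Euler-Maruyama integration of a periodic chain of terrace widths:
    each width relaxes toward its neighbours (linearized step repulsion,
    strength K) and is driven by conservative white noise built from
    differences of per-step Gaussian increments. A toy stand-in for the
    full BCF-type system of coupled nonlinear SDEs."""
    rng = random.Random(seed)
    w = [1.0] * n                          # start every terrace at the mean
    sq = noise * math.sqrt(dt)             # Euler-Maruyama noise scale
    for _ in range(steps):
        eta = [rng.gauss(0.0, sq) for _ in range(n)]
        w = [w[i]
             + K * (w[(i + 1) % n] - 2 * w[i] + w[(i - 1) % n]) * dt
             + (eta[i] - eta[i - 1])       # conservative: increments sum to 0
             for i in range(n)]
    return w
```

Histogramming the final widths over many realizations yields a terrace-width distribution, the quantity whose analytical mean-field form the paper derives and compares against kinetic Monte Carlo.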
NASA Astrophysics Data System (ADS)
Patrone, Paul; Einstein, T. L.; Margetis, Dionisios
2011-03-01
We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.
Biopolymer structure simulation and optimization via fragment regrowth Monte Carlo.
Zhang, Jinfeng; Kou, S C; Liu, Jun S
2007-06-14
An efficient exploration of the configuration space of a biopolymer is essential for its structure modeling and prediction. In this study, the authors propose a new Monte Carlo method, fragment regrowth via energy-guided sequential sampling (FRESS), which incorporates the idea of multigrid Monte Carlo into the framework of configurational-bias Monte Carlo and is suitable for chain polymer simulations. As a by-product, the authors also found a novel extension of the Metropolis Monte Carlo framework applicable to all Monte Carlo computations. They tested FRESS on hydrophobic-hydrophilic (HP) protein folding models in both two and three dimensions. For the benchmark sequences, FRESS not only found all the minimum energies obtained by previous studies with substantially less computation time but also found new lower energies for all the three-dimensional HP models with sequence length longer than 80 residues.
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.
2015-11-01
Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol-1, respectively.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
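The kind of simulation the article advocates is easy to reproduce. The sketch below treats a single axially loaded member with hypothetical normal distributions for applied stress and yield strength (the article's boom structure and parameters are not reproduced here); failure occurs when stress exceeds strength.

```python
import random

def failure_probability(n_sims=50000, seed=7):
    """Monte Carlo estimate of the failure probability of a simple axially
    loaded member: failure occurs when the random applied stress exceeds
    the random yield strength. Distribution parameters are illustrative:
    stress ~ N(300, 30) MPa, strength ~ N(400, 25) MPa."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sims):
        stress = rng.gauss(300.0, 30.0)
        strength = rng.gauss(400.0, 25.0)
        if stress > strength:
            failures += 1
    return failures / n_sims
```

For normal stress and strength the answer is also available in closed form (the margin is N(100, sqrt(30² + 25²)) MPa here), which is exactly the theoretical comparison the article uses to demonstrate the simulation's accuracy.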
Monte Carlo Simulation Methods for Computing Liquid-Vapor Saturation Properties of Model Systems.
Rane, Kaustubh S; Murali, Sabharish; Errington, Jeffrey R
2013-06-11
We discuss molecular simulation methods for computing the phase coexistence properties of complex molecules. The strategies that we pursue are histogram-based approaches in which thermodynamic properties are related to relevant probability distributions. We first outline grand canonical and isothermal-isobaric methods for directly locating a saturation point at a given temperature. In the former case, we show how reservoir and growth expanded ensemble techniques can be used to facilitate the creation and insertion of complex molecules within a grand canonical simulation. We next focus on grand canonical and isothermal-isobaric temperature expanded ensemble techniques that provide a means to trace saturation lines over a wide range of temperatures. To demonstrate the utility of the strategies introduced here, we present phase coexistence data for a series of molecules, including n-octane, cyclohexane, water, 1-propanol, squalane, and pyrene. Overall, we find the direct grand canonical approach to be the most effective means to directly locate a coexistence point at a given temperature and the isothermal-isobaric temperature expanded ensemble scheme to provide the most effective means to follow a saturation curve to low temperature.
Yu, Maolin; Du, R.
2005-08-05
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. A search of the literature, however, reveals that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, the product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading error, and lubrication. Presently, few methods seem able to predict the quality variation, not to mention identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict the product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.
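The Latin Hypercube Sampling step can be sketched independently of the FEM part. Below is a minimal stratified sampler on the unit hypercube (generic LHS, not tied to the authors' stamping parameters); each input dimension is cut into equal strata and each stratum is sampled exactly once, with the strata permuted independently per dimension.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin Hypercube Sample on [0, 1)^n_dims: each dimension is split
    into n_samples equal strata, one point per stratum, with the strata
    randomly permuted between dimensions. Returns a list of tuples."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                       # independent permutation
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_samples)]
```

Each sample point is then mapped through the inverse CDFs of the physical input distributions (material properties, loading, lubrication) and fed to the one-step FEM; the stratification is what lets LHS cover the input space with far fewer FEM runs than plain Monte Carlo.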
Monte Carlo Simulation of Endlinking Oligomers
NASA Technical Reports Server (NTRS)
Hinkley, Jeffrey A.; Young, Jennifer A.
1998-01-01
This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.
Monte Carlo simulation of a quantized universe.
NASA Astrophysics Data System (ADS)
Berger, Beverly K.
1988-08-01
A Monte Carlo simulation method which yields groundstate wave functions for multielectron atoms is applied to quantized cosmological models. In quantum mechanics, the propagator for the Schrödinger equation reduces to the absolute value squared of the groundstate wave function in the limit of infinite Euclidean time. The wave function of the universe as the solution to the Wheeler-DeWitt equation may be regarded as the zero energy mode of a Schrödinger equation in coordinate time. The simulation evaluates the path integral formulation of the propagator by constructing a large number of paths and computing their contribution to the path integral using the Metropolis algorithm to drive the paths toward a global minimum in the path energy. The result agrees with a solution to the Wheeler-DeWitt equation which has the characteristics of a nodeless groundstate wave function. Oscillatory behavior cannot be reproduced although the simulation results may be physically reasonable. The primary advantage of the simulations is that they may easily be extended to cosmologies with many degrees of freedom. Examples with one, two, and three degrees of freedom (d.f.) are presented.
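The path-integral Metropolis procedure described here is standard, and a toy version is instructive. The sketch below samples the discretised Euclidean path integral of a 1D harmonic oscillator (a stand-in, not a cosmological minisuperspace model; m = ω = ħ = 1, with illustrative lattice parameters); at large Euclidean time the sampled positions follow the squared ground-state wave function.

```python
import math
import random

def pimc_harmonic(n_slices=30, beta=10.0, sweeps=2000, dmax=0.5, seed=3):
    """Metropolis sampling of the discretised Euclidean path integral for a
    1D harmonic oscillator. Local moves update one time slice at a time;
    the first half of the sweeps is discarded as burn-in. Returns the
    sampled positions, distributed as |psi_0(x)|^2 for large beta."""
    rng = random.Random(seed)
    dtau = beta / n_slices
    x = [0.0] * n_slices          # periodic path in Euclidean time
    samples = []
    for sweep in range(sweeps):
        for i in range(n_slices):
            xl, xr = x[(i - 1) % n_slices], x[(i + 1) % n_slices]
            xn = x[i] + rng.uniform(-dmax, dmax)

            def local_action(xi):
                # kinetic links to both neighbours + local potential term
                kin = ((xi - xl) ** 2 + (xr - xi) ** 2) / (2.0 * dtau)
                pot = 0.5 * xi * xi * dtau
                return kin + pot

            dS = local_action(xn) - local_action(x[i])
            if dS <= 0 or rng.random() < math.exp(-dS):
                x[i] = xn          # Metropolis accept
        if sweep > sweeps // 2:
            samples.extend(x)
    return samples
```

For the harmonic oscillator the exact ground-state density is a Gaussian with variance 1/2, so the histogram of the returned samples provides a direct check of the method; the same machinery scales to many degrees of freedom, which is the advantage the abstract emphasises.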
NASA Astrophysics Data System (ADS)
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
Introduction: The first few days after the occurrence of a strong earthquake, in the presence of an ongoing aftershock sequence, are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for short-term forecasting of the spatio-temporal evolution of seismicity (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori, based on a set of events that do not belong to the ongoing seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the ongoing sequence, may have several advantages, such as tuning the model to the characteristics of the specific sequence and capturing possible time variation of the model parameters. Simulation-based methods can be employed to provide a robust estimate of the spatio-temporal seismicity forecast in a prescribed forecasting time interval (e.g., one day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters, expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the ongoing seismic sequence. A Markov chain Monte Carlo simulation scheme is used herein to sample directly from the posterior probability distribution for the ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affect the seismicity in an epidemic-type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
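The Markov chain Monte Carlo parameter-estimation step can be illustrated, in heavily simplified form, by a random-walk Metropolis sampler for the posterior of a single seismicity-rate parameter given hypothetical daily aftershock counts; a full ETAS posterior involves several parameters, but the sampling logic is the same. All data and tuning values below are invented for illustration.

```python
import random, math

# Hypothetical observed daily aftershock counts
counts = [42, 35, 28, 25, 19, 17, 14]

def log_post(mu):
    """Log-posterior for a Poisson rate mu with a flat prior on mu > 0."""
    if mu <= 0:
        return -math.inf
    return sum(k * math.log(mu) - mu for k in counts)

random.seed(3)
mu, chain = 20.0, []
for step in range(20000):
    prop = mu + random.gauss(0, 2.0)          # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                              # Metropolis accept
    if step > 2000:                            # discard burn-in
        chain.append(mu)

print(round(sum(chain) / len(chain), 1))
```

Forecasts robust to parameter uncertainty are then obtained by simulating the forecasting interval once per posterior sample rather than once at a single calibrated value.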
Monte Carlo simulations of phosphate polyhedron connectivity in glasses
ALAM,TODD M.
2000-01-01
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Refined modeling of nuclear reactors can overload the memory of a single processor core. One method to solve this problem is domain decomposition. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Advances in Monte Carlo computer simulation
NASA Astrophysics Data System (ADS)
Swendsen, Robert H.
2011-03-01
Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.
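The Metropolis method mentioned above can be shown in its textbook form, a single-spin-flip simulation of a small 2D Ising model; the lattice size, temperature, and sweep counts below are arbitrary illustrative choices.

```python
import random, math

# Metropolis sampling of a 16 x 16 ferromagnetic Ising model
L, beta = 16, 0.3          # beta below the critical point: disordered phase
random.seed(7)
s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def sweep():
    """One Monte Carlo sweep: L*L attempted single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = s[(i+1)%L][j] + s[(i-1)%L][j] + s[i][(j+1)%L] + s[i][(j-1)%L]
        dE = 2 * s[i][j] * nb
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[i][j] = -s[i][j]

mags = []
for t in range(600):
    sweep()
    if t > 100:                              # discard equilibration sweeps
        mags.append(abs(sum(map(sum, s))) / (L * L))
print(round(sum(mags) / len(mags), 2))
```

At this temperature the mean absolute magnetization per spin stays small, as expected above the critical point.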
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
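The construction described, subdividing the counting interval so finely that no two counts share a subinterval, can be sketched directly: the total count is then binomial, and it approaches a Poisson distribution (mean approximately equal to variance) as the number of subintervals grows. All parameter values are illustrative.

```python
import random

def simulate_counts(mean, n, trials, rng):
    """Split the interval into n subintervals, each holding at most one
    count with probability p = mean / n; return the total per trial."""
    p = mean / n
    return [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]

rng = random.Random(11)
counts = simulate_counts(mean=4.0, n=200, trials=2000, rng=rng)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
print(round(m, 1), round(v, 1))   # mean and variance should nearly coincide
```

With n = 200 the binomial variance np(1 - p) is already within 2% of the Poisson value, illustrating the limit the derivation takes.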
Monte Carlo simulation of electrons in dense gases
NASA Astrophysics Data System (ADS)
Tattersall, Wade; Boyle, Greg; Cocks, Daniel; Buckman, Stephen; White, Ron
2014-10-01
We implement a Monte Carlo simulation modelling the transport of electrons and positrons in dense gases and liquids, using a dynamic structure factor that allows us to construct structure-modified effective cross sections. These account for the coherent effects caused by interactions with the relatively dense medium. The dynamic structure factor also allows us to model thermal gases in the same manner, without needing to directly sample the velocities of the neutral particles. We present the results of a series of Monte Carlo simulations that verify and apply this new technique, and make comparisons with macroscopic predictions and Boltzmann equation solutions. This work was financially supported by the Australian Research Council.
Realistic Monte Carlo Simulation of PEN Apparatus
NASA Astrophysics Data System (ADS)
Glaser, Charles; PEN Collaboration
2015-04-01
The PEN collaboration undertook to measure the π+ → e+νe(γ) branching ratio with a relative uncertainty of 5 × 10^-4 or less at the Paul Scherrer Institute. This observable is highly susceptible to small non V-A contributions, i.e., non-Standard-Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. GEANT4 Monte Carlo simulation is integral to the analysis, as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed to match the pion beam profile, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level, and the proper construction of active-target waveforms and associated noise, enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and position resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching-ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
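A comb in this sense lays equally spaced "teeth" across the cumulative particle weight and keeps the particle under each tooth, converting a set of unequally weighted particles into equally weighted ones while preserving total weight; rare, heavily weighted particles (here, the stand-ins for hot electrons) are duplicated in proportion to their weight. The sketch below shows only this resampling step, on made-up weights; it is not the authors' 2-D device-simulator implementation.

```python
import random

def comb_resample(weights, m, rng):
    """Systematic 'comb' resampling: place m equally spaced teeth over the
    cumulative weight and return the index of the particle under each tooth.
    Survivors then carry equal weight sum(weights) / m."""
    W = sum(weights)
    step = W / m
    tooth = rng.uniform(0, step)          # random offset of the whole comb
    picks, cum, i = [], 0.0, 0
    for k in range(m):
        target = tooth + k * step
        while cum + weights[i] < target:  # advance to the particle under tooth
            cum += weights[i]
            i += 1
        picks.append(i)
    return picks

rng = random.Random(5)
w = [0.01, 2.0, 0.5, 3.0, 0.02, 1.5]      # hypothetical particle weights
picks = comb_resample(w, 6, rng)
print(picks)
```

Heavy particles are selected multiple times while near-zero-weight particles are usually dropped, which is what reduces the variance of tail statistics at fixed cost.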
Monte Carlo simulations of lattice gauge theories
Rebbi, C
1980-02-01
Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n; the eight-element group of quaternions, Q; and the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers, Z_2, are factored out. All of these groups can be considered subgroups of SU(2), and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition, starting with a lattice which is half ordered and half disordered; and measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)
NASA Astrophysics Data System (ADS)
Cassidy, Jeffrey; Betz, Vaughn; Lilge, Lothar
2015-02-01
Monte Carlo (MC) simulation is recognized as the “gold standard” for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations are particularly attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have attractive implementation features, including inherent parallelism and a continuously variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT, generating dose-volume histograms on a tetrahedral-mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods, and it is therefore applicable to a very broad cross-section of anatomy and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right, greatly reducing the number of packets, and hence the runtime, required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.
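The dose-volume-histogram tabulation itself is simple once per-element doses are known: for each threshold, sum the volume of all mesh elements receiving at least that dose. A minimal sketch on hypothetical per-element values:

```python
def dvh(doses, volumes, thresholds):
    """Cumulative dose-volume histogram: fraction of total volume
    receiving at least each threshold dose."""
    total = sum(volumes)
    return [sum(v for d, v in zip(doses, volumes) if d >= t) / total
            for t in thresholds]

# Hypothetical per-element doses (J/cm^2) and volumes (cm^3) on a mesh
doses   = [0.5, 1.2, 3.4, 2.2, 0.9, 4.1]
volumes = [1.0, 2.0, 1.0, 1.5, 2.5, 0.5]
print([round(f, 2) for f in dvh(doses, volumes, [0, 1, 2, 3, 4])])
```

Because the histogram integrates over many elements, its statistical error is far smaller than that of any single element's fluence, which is the variance-reduction effect the abstract notes.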
Bernal, M A; Bordage, M C; Brown, J M C; Davídková, M; Delage, E; El Bitar, Z; Enger, S A; Francis, Z; Guatelli, S; Ivanchenko, V N; Karamitros, M; Kyriakou, I; Maigne, L; Meylan, S; Murakami, K; Okada, S; Payno, H; Perrot, Y; Petrovic, I; Pham, Q T; Ristic-Fira, A; Sasaki, T; Štěpán, V; Tran, H N; Villagrasa, C; Incerti, S
2015-12-01
Understanding the fundamental mechanisms involved in the induction of biological damage by ionizing radiation remains a major challenge of today's radiobiology research. The Monte Carlo simulation of physical, physicochemical and chemical processes involved may provide a powerful tool for the simulation of early damage induction. The Geant4-DNA extension of the general purpose Monte Carlo Geant4 simulation toolkit aims to provide the scientific community with an open source access platform for the mechanistic simulation of such early damage. This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta). In particular, the review includes the description of new physical models for the description of electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of physicochemical and chemical stages of water radiolysis. Several implementations of geometrical models of biological targets are presented as well, and the list of Geant4-DNA examples is described.
NASA Astrophysics Data System (ADS)
Terzyk, Artur P.; Furmaniak, Sylwester; Gauden, Piotr A.; Harris, Peter J. F.; Wloch, Jerzy; Kowalczyk, Piotr
2007-10-01
The adsorption of gases on microporous carbons is still poorly understood, partly because the structure of these carbons is not well known. Here, a model of microporous carbons based on fullerene-like fragments is used as the basis for a theoretical study of Ar adsorption on carbon. First, a simulation box was constructed, containing a plausible arrangement of carbon fragments. Next, using a new Monte Carlo simulation algorithm, two types of carbon fragments were gradually placed into the initial structure to increase its microporosity. Thirty-six different microporous carbon structures were generated in this way. Using the method proposed recently by Bhattacharya and Gubbins (BG), the micropore size distributions of the obtained carbon models and the average micropore diameters were calculated. For ten chosen structures, Ar adsorption isotherms (87 K) were simulated via the hyper-parallel tempering Monte Carlo simulation method. The isotherms obtained in this way were described by widely applied methods of microporous carbon characterisation, i.e. the Nguyen-Do and Horvath-Kawazoe methods, high-resolution αs plots, adsorption potential distributions and the Dubinin-Astakhov (DA) equation. From the simulated isotherms described by the DA equation, the average micropore diameters were calculated using empirical relationships proposed by different authors, and these were compared with those from the BG method.
Monte Carlo simulation of Alaska wolf survival
NASA Astrophysics Data System (ADS)
Feingold, S. J.
1996-02-01
Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
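Penna's bit-string model represents each animal's genome as a bit string whose set bits are deleterious mutations switched on at the corresponding age; an animal dies once the active mutations reach a threshold. In the sketch below, an extra flat death probability stands in for hunting pressure, and a Verhulst factor limits the population. All parameter values are illustrative, not those of the wolf study.

```python
import random

GENOME_BITS, THRESHOLD, MUT, BIRTH_AGE, HUNT = 32, 3, 2, 8, 0.05

def step(pop, rng, capacity=5000):
    """One time step of the Penna model with an added hunting death rate."""
    survivors = []
    verhulst = 1 - len(pop) / capacity        # crowding survival factor
    for age, genome in pop:
        # deleterious mutations active up to the current age
        bad = bin(genome & ((1 << (age + 1)) - 1)).count("1")
        if bad >= THRESHOLD or rng.random() > verhulst or rng.random() < HUNT:
            continue                          # death
        survivors.append((age + 1, genome))
        if age >= BIRTH_AGE:                  # reproduction with new mutations
            child = genome
            for _ in range(MUT):
                child |= 1 << rng.randrange(GENOME_BITS)
            survivors.append((0, child))
    return survivors

rng = random.Random(2)
pop = [(0, 0) for _ in range(1000)]           # initially mutation-free
for _ in range(100):
    pop = step(pop, rng)
print(len(pop))
```

Raising `HUNT`, or adding a social-disruption penalty on top of it, lets one probe the extinction threshold the abstract describes.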
Monte Carlo simulation of Touschek effect.
Xiao, A.; Borland, M.; Accelerator Systems Division
2010-07-30
We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-01
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, a CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using their respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that is a conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined the representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities excluding those of C and O for the light spongiosa tissues between 1.0 g cm^-3 and 1.1 g cm^-3 occupying 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
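The two-step polyline idea can be sketched with toy node tables (the values below are invented, not the paper's calibrated polylines): a CT number is first interpolated to mass density, and the density is then interpolated to a material property such as the stopping-power ratio.

```python
def interp(x, xs, ys):
    """Piecewise-linear (polyline) interpolation with flat extrapolation."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

# Hypothetical polyline nodes: HU -> density, then density -> SPR
hu_nodes, rho_nodes = [-1000, 0, 100, 1600], [0.001, 1.00, 1.07, 1.92]
rho_grid, spr_nodes = [0.001, 1.00, 1.07, 1.92], [0.001, 1.00, 1.05, 1.70]

def ct_to_spr(hu):
    rho = interp(hu, hu_nodes, rho_nodes)   # step 1: CT number -> mass density
    return interp(rho, rho_grid, spr_nodes)  # step 2: density -> SPR

print(round(ct_to_spr(0), 3), round(ct_to_spr(50), 3))
```

Only the first polyline depends on the scanner, so recalibrating a CT system leaves the density-to-composition tables untouched, which is the management advantage the abstract claims.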
Lin, J. Y. Y.; Aczel, Adam A; Abernathy, Douglas L; Nagler, Stephen E; Buyers, W. J. L.; Granroth, Garrett E
2014-01-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray-tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions, resulting in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations needed to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom elicits varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector-row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x-y-z TCM, and z-axis-only TCM to obtain
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Bhuiyan, Lutful Bari; Lamperski, Stanisław; Wu, Jianzhong; Henderson, Douglas
2012-08-30
Theoretical difficulties in describing the structure and thermodynamics of an ionic liquid double layer are often associated with the nonspherical shapes of ionic particles and extremely strong electrostatic interactions. The recent density functional theory predictions for the electrochemical properties of the double layer formed by a model ionic liquid wherein each cation is represented by two touching hard spheres, one positively charged and the other neutral, and each anion by a negatively charged hard spherical particle, remain untested in this strong coupling regime. We report results from a Monte Carlo simulation of this system. Because for an ionic liquid the Bjerrum length is exceedingly large, it is difficult to perform simulations under conditions of strong electrostatic coupling used in the previous density functional theory study. Results are obtained for a somewhat smaller (but still large) Bjerrum length so that reliable simulation data can be generated for a useful test of the corresponding theoretical predictions. On the whole, the density profiles predicted by the theory are quite good in comparison with the simulation data. The strong oscillations of ionic density profiles and the local electrostatic potential predicted by this theory are confirmed by simulation, although for a small electrode charge and strong electrostatic coupling, the theory predicts the contact ionic densities to be noticeably different from the Monte Carlo results. The theoretical results for the more important electrostatic potential profile at contact are given with good accuracy.
Shell model Monte Carlo methods
Koonin, S.E.; Dean, D.J.
1996-10-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of {gamma}-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
Representation and simulation for pyrochlore lattice via Monte Carlo technique
NASA Astrophysics Data System (ADS)
Passos, André Luis; de Albuquerque, Douglas F.; Filho, João Batista Santos
2016-05-01
This work presents a representation of the Kagome and pyrochlore lattices using Monte Carlo simulation, as well as some results for their critical properties. These lattices are composed of corner-sharing triangles and tetrahedra, respectively. The simulation employed the Wolff cluster algorithm for spin updates in the standard ferromagnetic Ising model. The critical temperature and exponents were determined using the histogram technique and finite-size scaling theory.
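The Wolff cluster update grows a cluster of aligned spins, adding each aligned neighbor with probability 1 - exp(-2β), and flips the whole cluster at once, which suppresses the critical slowing down of single-spin-flip updates. A minimal sketch for a 2D Ising lattice (size and temperature are arbitrary illustrative choices; the Kagome and pyrochlore cases differ only in the neighbor lists):

```python
import random, math

L, beta = 16, 0.5
random.seed(9)
s = [[1] * L for _ in range(L)]
p_add = 1 - math.exp(-2 * beta)   # Wolff bond-activation probability

def wolff_step():
    """Grow one cluster from a random seed spin and flip it; return its size."""
    i, j = random.randrange(L), random.randrange(L)
    seed_spin = s[i][j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L):
            if (nx, ny) not in cluster and s[nx][ny] == seed_spin \
               and random.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        s[x][y] = -seed_spin
    return len(cluster)

sizes = [wolff_step() for _ in range(200)]
print(sum(sizes) / len(sizes))
```

Near the critical point the mean cluster size tracks the susceptibility, which is why cluster updates pair naturally with the histogram and finite-size-scaling analyses mentioned above.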
Monte-Carlo simulation of Callisto's exosphere
NASA Astrophysics Data System (ADS)
Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.
2015-12-01
We model Callisto's exosphere, arising from both its ice and non-ice surface, using a Monte Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula; the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.
Chang, Qiang; Herbst, Eric
2014-06-01
We have designed an improved algorithm that enables us to simulate the chemistry of cold dense interstellar clouds with a full gas-grain reaction network. The chemistry is treated by a unified microscopic-macroscopic Monte Carlo approach that includes photon penetration and bulk diffusion. To determine the significance of these two processes, we simulate the chemistry with three different models. In Model 1, we use an exponential treatment to follow how photons penetrate and photodissociate ice species throughout the grain mantle. Moreover, the products of photodissociation are allowed to diffuse via bulk diffusion and react within the ice mantle. Model 2 is similar to Model 1 but with a slower bulk diffusion rate. A reference Model 0, which only allows photodissociation reactions to occur on the top two layers, is also simulated. Photodesorption is assumed to occur from the top two layers in all three models. We found that the abundances of major stable species in grain mantles do not differ much among these three models, and the results of our simulation for the abundances of these species agree well with observations. Likewise, the abundances of gas-phase species in the three models do not vary. However, the abundances of radicals in grain mantles can differ by up to two orders of magnitude depending upon the degree of photon penetration and the bulk diffusion of photodissociation products. We also found that complex molecules can be formed at temperatures as low as 10 K in all three models.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
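The weighted Beer-Lambert scaling step described above can be illustrated as follows. This is a minimal sketch under stated assumptions: `scale_reflectance`, its arguments, and the default speed of light in tissue are illustrative choices, not the authors' code; the per-layer time fractions are assumed to come from the closed-form average-path expression mentioned in the abstract.

```python
import math

def scale_reflectance(r0, times, mu_a_layers, fractions, c_tissue=0.0214):
    """Scale a zero-absorption time-resolved reflectance curve R0(t) by a
    weighted Beer-Lambert factor.

    r0: reflectance samples of the zero-absorption simulation.
    times: photon arrival times [ps].
    mu_a_layers: absorption coefficient of each layer [1/cm].
    fractions: fraction of total path time spent in each layer (sums to 1).
    c_tissue: speed of light in tissue [cm/ps], assuming refractive index ~1.4.
    """
    # Effective absorption coefficient weighted by time spent in each layer
    mu_eff = sum(m * f for m, f in zip(mu_a_layers, fractions))
    scaled = []
    for r, t in zip(r0, times):
        path = c_tissue * t                   # total path length at time t
        scaled.append(r * math.exp(-mu_eff * path))
    return scaled
```

The appeal of this form is exactly what the abstract notes: only the layer time fractions are needed, not the stored path length or collision count of every photon.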
Maier, Thomas A; Alvarez, Gonzalo; Summers, Michael Stuart; Schulthess, Thomas C
2010-01-01
Using dynamic cluster quantum Monte Carlo simulations, we study the superconducting behavior of a 1/8-doped two-dimensional Hubbard model with an imposed unidirectional stripelike charge-density-wave modulation. We find a significant increase of the pairing correlations and critical temperature relative to the homogeneous system when the modulation length scale is sufficiently large. With a separable form of the irreducible particle-particle vertex, we show that optimized superconductivity is obtained for a moderate modulation strength, due to a delicate balance between the modulation-enhanced pairing interaction and a concomitant suppression of the bare particle-particle excitations by a modulation-induced reduction of the quasiparticle weight.
Probability Forecasting Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Duncan, M.; Frisbee, J.; Wysack, J.
2014-09-01
Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus, new and more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high-risk and potentially high-risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process in which the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a
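The trial-counting scheme in the last sentences can be sketched with a toy encounter-plane model. Everything here (the function name, a 2-D uncorrelated Gaussian position uncertainty, a single combined hard-body radius) is an illustrative assumption, not the operational method:

```python
import math
import random

def collision_probability(miss_mean, sigma, hard_body_radius,
                          n_trials=20000, seed=0):
    """Crude encounter-plane Monte Carlo collision probability.

    Samples the relative position at closest approach from an uncorrelated
    2-D Gaussian (mean miss vector, per-axis sigmas) and counts the fraction
    of trials falling inside the combined hard-body radius.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        x = rng.gauss(miss_mean[0], sigma[0])
        y = rng.gauss(miss_mean[1], sigma[1])
        if math.hypot(x, y) < hard_body_radius:   # trial results in a hit
            hits += 1
    return hits / n_trials
```

Re-running this estimator as the state uncertainty shrinks toward the event gives exactly the kind of probability evolution the forecasting method tracks.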
Coherent Scattering Imaging Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5 × 0.5 × 0.5 cm³ in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. A further study is needed to examine the effect of breast density and breast thickness
Monte Carlo simulations of medical imaging modalities
Estes, G.P.
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Zhang, Minhua; Chen, Lihang; Yang, Huaming; Sha, Xijiang; Ma, Jing
2016-07-01
Gibbs ensemble Monte Carlo simulation with configurational bias was employed to study the vapor-liquid equilibrium (VLE) for pure acetic acid and for a mixture of acetic acid and ethylene. An improved united-atom force field for acetic acid based on a Lennard-Jones functional form was proposed. The Lennard-Jones well depth and size parameters for the carboxyl oxygen and hydroxyl oxygen were determined by fitting the interaction energies of acetic acid dimers to the Lennard-Jones potential function. Four different acetic acid dimers and the proportions of them were considered when the force field was optimized. It was found that the new optimized force field provides a reasonable description of the vapor-liquid phase equilibrium for pure acetic acid and for the mixture of acetic acid and ethylene. Accurate values were obtained for the saturated liquid density of the pure compound (average deviation: 0.84 %) and for the critical points. The new optimized force field demonstrated greater accuracy and reliability in calculations of the solubility of the mixture of acetic acid and ethylene as compared with the results obtained with the original TraPPE-UA force field.
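The fitting target in this kind of force-field optimization is the 12-6 Lennard-Jones pair energy; the well depth and size parameters named in the abstract are exactly the two parameters of this function. A minimal sketch (function and argument names are generic, not from the paper):

```python
def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy.

    epsilon: well depth (depth of the potential minimum).
    sigma: size parameter (separation at which the energy crosses zero).
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

The minimum sits at r = 2^(1/6) σ with energy −ε, which is why fitting dimer interaction energies to this form pins down both parameters at once.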
Multilayer adsorption by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Molina-Mateo, J.; Salmerón Sánchez, M.; Monleón Pradas, M.; Torregrosa Cabanilles, C.
2012-10-01
Adsorption phenomena are characterized by models that include free parameters trying to reproduce experimental results. In order to understand the relationship between the model parameters and the material properties, the adsorption of small molecules on a crystalline plane surface has been simulated using the bond fluctuation model. A direct comparison between the Guggenheim-Anderson-de Boer (GAB) model for multilayer adsorption and computer simulations allowed us to establish correlations between the adsorption model parameters and the simulated interaction potentials.
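For reference, the GAB isotherm against which the simulations are compared has a simple closed form; a sketch follows (function name and parameter symbols are generic, not taken from the paper):

```python
def gab_isotherm(a_w, m0, c, k):
    """Guggenheim-Anderson-de Boer multilayer adsorption isotherm.

    a_w: activity of the adsorbate (0..1).
    m0: monolayer capacity.
    c: energy constant for the first layer.
    k: correction factor for multilayer sorption.
    Returns the equilibrium amount adsorbed.
    """
    x = k * a_w
    return m0 * c * x / ((1.0 - x) * (1.0 - x + c * x))
```

With k = 1 the expression reduces to the BET isotherm, which is one way of reading the correlations the authors draw between the model parameters and the simulated interaction potentials.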
Monte Carlo simulations on SIMD computer architectures
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
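The lattice-partitioning idea is easiest to see in the classic checkerboard decomposition: sites of one color share no nearest-neighbor bonds, so an entire sublattice can be updated simultaneously, one site per SIMD processor. A serial Python sketch of the two half-sweeps (illustrative, not the MasPar code):

```python
import math
import random

def checkerboard_sweep(spins, L, beta, J=1.0, seed=None):
    """Metropolis sweep as two half-sweeps over the black/white sublattices
    of an L x L periodic Ising lattice (spins: list of lists of +1/-1).

    Within one color, no two sites interact, so each half-sweep could be
    executed in parallel with one site per processor.
    """
    rng = random.Random(seed)
    for color in (0, 1):                      # black sites, then white sites
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != color:
                    continue
                h = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                     + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * J * spins[i][j] * h    # energy cost of flipping
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = -spins[i][j]
    return spins
```

Longer-range interactions, such as the screened Coulomb case in the abstract, need more sublattice colors (or a different partition) so that simultaneously updated sites still do not interact.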
ERIC Educational Resources Information Center
Hannan, Peter J.; Murray, David M.
1996-01-01
A Monte Carlo study compared performance of linear and logistic mixed-model analyses of simulated community trials having specific event rates, intraclass correlations, and degrees of freedom. Results indicate that in studies with adequate denominator degrees of freedom, the researcher may use either method of analysis, with certain cautions. (SLD)
NASA Astrophysics Data System (ADS)
Gruziel, Magdalena; Rudnicki, Witold R.; Lesyng, Bogdan
2008-02-01
In this study, the hydration of a model Lennard-Jones solute particle and the analytical approximations of the free energy of hydration as functions of solute microscopic parameters are analyzed. The control parameters of the solute particle are the charge, the Lennard-Jones diameter, and also the potential well depth. The obtained multivariate free energy functions of hydration were parametrized based on Metropolis Monte Carlo simulations in the extended NpT ensemble, and interpreted based on mesoscopic solvation models proposed by Gallicchio and Levy [J. Comput. Chem. 25, 479 (2004)], and Wagoner and Baker [Proc. Natl. Acad. Sci. U.S.A. 103, 8331 (2006)]. Regarding the charge and the solute diameter, the dependence of the free energy on these parameters is in qualitative agreement with former studies. The role of the third parameter, the potential well depth not previously considered, appeared to be significant for sufficiently precise bivariate solvation free energy fits. The free energy fits for cations and neutral solute particles were merged, resulting in a compact manifold of the free energy of solvation. The free energy of hydration for anions forms two separate manifolds, which most likely results from an abrupt change of the coordination number when changing the size of the anion particle.
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^−2) or O(ε^−2 (ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^−3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^−5. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
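The multilevel idea, combining solutions with varying numbers of timesteps so that coarse levels absorb most of the sampling cost, can be sketched for a scalar SDE with Euler–Maruyama (the paper also treats Milstein). The function below is a generic Giles-style estimator, not the authors' plasma code; fine and coarse paths on each level share the same Brownian increments so the level corrections have low variance.

```python
import math
import random

def mlmc_estimate(drift, diff, x0, T, L_max, n_samples, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = drift(X)dt + diff(X)dW.

    Level l uses 2**l Euler-Maruyama steps; n_samples[l] paths per level.
    Returns sum over levels of the sample means of P_l - P_{l-1}.
    """
    rng = random.Random(seed)

    def level_diff(l):
        nf = 2 ** l                    # fine steps on this level
        dt = T / nf
        xf, xc = x0, x0
        dw_hist = []
        for n in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            xf += drift(xf) * dt + diff(xf) * dw
            dw_hist.append(dw)
            if l > 0 and n % 2 == 1:   # coarse step uses the summed increments
                xc += drift(xc) * (2 * dt) \
                      + diff(xc) * (dw_hist[-2] + dw_hist[-1])
        return xf - (xc if l > 0 else 0.0)

    est = 0.0
    for l in range(L_max + 1):
        est += sum(level_diff(l) for _ in range(n_samples[l])) / n_samples[l]
    return est
```

The telescoping sum means only a handful of expensive fine-step paths are needed, which is where the O(ε^−3) → O(ε^−2) cost reduction comes from.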
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. The parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth with either a distributed-memory parallel computer or a shared-memory machine with message passing libraries. The significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The overhead of communication does not increase appreciably, and the speedup shows an ascending tendency as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
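The contrast between exhaustive and sampled designs is easy to make concrete: a full factorial grid costs the product of the per-parameter level counts, while a Monte Carlo design has a fixed budget regardless of dimension. A minimal sketch (function names illustrative):

```python
import random
from itertools import product

def exhaustive_design(levels_per_param):
    """Full factorial design: every combination of the given parameter levels.

    Cost is the product of the level counts, which explodes with dimension.
    """
    return list(product(*levels_per_param))

def sampled_design(bounds, n, seed=0):
    """Monte Carlo design: n parameter vectors drawn uniformly from a box.

    bounds: list of (lo, hi) per parameter. Cost is fixed at n regardless
    of the number of parameters.
    """
    rng = random.Random(seed)
    return [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(n)]
```

With 10 parameters at just two levels each, the grid already needs 1024 runs; the sampled design stays at whatever n the compute budget allows.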
NASA Astrophysics Data System (ADS)
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints for the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation, and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities, and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to the results of Durda et al. (2007). Resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
Coherent scatter imaging Monte Carlo simulation.
Hassan, Laila; MacDonald, Carolyn A
2016-07-01
Conventional mammography can suffer from poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter slot scan imaging is an imaging technique which provides additional information and is compatible with conventional mammography. A Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. Coherent scatter could be exploited using a system similar to conventional slot scan mammography system with antiscatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The simulated carcinomas were detectable for tumors as small as 5 mm in diameter, so coherent scatter analysis using a wide-slot setup could be promising as an enhancement for screening mammography. Employing coherent scatter information simultaneously with conventional mammography could yield a conventional high spatial resolution image with additional coherent scatter information.
Kinetic Monte Carlo simulations of proton conductivity
NASA Astrophysics Data System (ADS)
Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.
2014-07-01
The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results have been discussed with reference to a two-step process called the Grotthuss mechanism. There is a widespread belief that this mechanism is responsible for fast proton mobility. We showed in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the current dependence on proton concentration has been analyzed. In order to test our microscopic model the proton transport in polymer electrolyte membranes based on benzimidazole C7H6N2 molecules is studied.
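The two-step Grotthuss picture maps naturally onto rejection-free (n-fold way) kinetic Monte Carlo: each move type carries a rate, one event is chosen in proportion to its rate, and the clock advances by an exponential waiting time. A toy sketch with invented rates follows; the real model resolves proton positions and molecular orientations, which this deliberately omits.

```python
import math
import random

def kmc_run(rates, n_events, seed=0):
    """Rejection-free kinetic Monte Carlo over a fixed event catalog.

    rates: dict mapping event name -> rate. At each step one event is
    selected with probability rate/R_total and the clock advances by an
    exponential waiting time with mean 1/R_total.
    Returns (elapsed_time, event_counts).
    """
    rng = random.Random(seed)
    names = list(rates)
    total = sum(rates.values())
    t, counts = 0.0, {n: 0 for n in names}
    for _ in range(n_events):
        r = rng.random() * total          # pick an event by cumulative rate
        acc = 0.0
        for name in names:
            acc += rates[name]
            if r < acc:
                counts[name] += 1
                break
        t += -math.log(rng.random()) / total   # exponential waiting time
    return t, counts

# Grotthuss-like toy: fast reorientation, slower intermolecular transfer
# (rates are invented for illustration only)
elapsed, counts = kmc_run({"reorientation": 10.0, "transfer": 1.0},
                          20000, seed=1)
```

The relative frequency of the two event types, which the abstract identifies as crucial for the conductivity, is exactly the ratio of the chosen rates.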
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
McGrath, Matthew J; Kuo, I-F Will; Ngouana W, Brice F; Ghogomu, Julius N; Mundy, Christopher J; Marenich, Aleksandr V; Cramer, Christopher J; Truhlar, Donald G; Siepmann, J Ilja
2013-08-28
The Gibbs free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation/quantum chemical approach at four temperatures between T = 300 and 450 K. The Gibbs free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state Gibbs free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations using either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter Gibbs free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation Gibbs free energies agree very well with available experimental data.
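The decomposition in the first sentences amounts to adding two legs of a thermodynamic cycle; as a sanity-check sketch, the dissociation leg can be written in terms of a pKa via ΔG_diss = RT ln(10) · pKa. The function and its inputs are illustrative, not the paper's data:

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)

def dG_total(dG_transfer, pKa, T):
    """Combine the vapor -> aqueous-liquid transfer free energy with the
    standard-state dissociation free energy obtained from a pKa.

    dG_transfer: Gibbs free energy of transfer [kJ/mol].
    pKa: acid dissociation constant exponent (negative for strong acids).
    T: temperature [K].
    """
    dG_diss = R * T * math.log(10.0) * pKa
    return dG_transfer + dG_diss
```

For a strong acid like HCl the pKa is negative, so the dissociation leg makes the combined solvation-plus-dissociation free energy more favorable, consistent with the sign of the experimental values the authors compare against.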
NASA Astrophysics Data System (ADS)
Saika, Yohei
2008-02-01
On the basis of the statistical mechanics of the Q-Ising model, we formulate the problem of inverse halftoning for a halftone image obtained by the error diffusion method using the Floyd-Steinberg and two weight kernels. Then, using Markov chain Monte Carlo simulation both for a set of snapshots of the Q-Ising model and for a gray-level standard image, we estimate the performance of our method based on the mean square error and the edge structures observed both in the halftone image and in the reconstructed images, such as the edge length and the gradient of the gray level. We clarify that our method reconstructs the gray-level image from the halftone image by suppressing the gradient of the gray level on the edges embedded in the halftone image and by removing a part of the edges if we appropriately set the parameters of our model.
Papadimitroulas, P; Kagadis, GC; Loudos, G
2014-06-15
Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas, and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest dose being absorbed in kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
Chen, Yunjie; Roux, Benoît
2015-08-11
Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient approach, to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up
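The accept/reject test in step (6) is what guarantees Boltzmann sampling. A minimal hybrid MD/MC sketch on a 1-D double-well illustrates that core step, without the CG guiding or the two-ends momentum reversal of the full algorithm; the potential, time step, and trajectory length are illustrative assumptions:

```python
import math, random

def U(x):
    # double-well potential U(x) = (x^2 - 1)^2
    return (x * x - 1.0) ** 2

def grad_U(x):
    # dU/dx = 4x(x^2 - 1)
    return 4.0 * x * (x * x - 1.0)

def hybrid_mdmc(n_iter=2000, dt=0.1, n_md=10, beta=1.0, seed=0):
    rng = random.Random(seed)
    x = 1.0
    accepted = 0
    samples = []
    for _ in range(n_iter):
        # fresh Maxwell-Boltzmann velocity (unit mass)
        v = rng.gauss(0.0, 1.0 / math.sqrt(beta))
        e0 = U(x) + 0.5 * v * v
        xn, vn = x, v
        # short velocity-Verlet trajectory: the "dynamic propagation" step
        f = -grad_U(xn)
        for _ in range(n_md):
            vn += 0.5 * dt * f
            xn += dt * vn
            f = -grad_U(xn)
            vn += 0.5 * dt * f
        e1 = U(xn) + 0.5 * vn * vn
        # Metropolis test on the total-energy change preserves the
        # equilibrium Boltzmann distribution despite integration error
        if rng.random() < math.exp(min(0.0, -beta * (e1 - e0))):
            x = xn
            accepted += 1
        samples.append(x)
    return samples, accepted / n_iter
```

In the CG-guided scheme, the trajectory in the middle is replaced by a nonequilibrium drive toward the CG target, but the Metropolis gate plays the same role.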
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: AccuBoost is a modern method of breast brachytherapy in which a boost dose is delivered while the tissue is compressed by a mammography unit. The dose distribution in uncompressed as well as compressed tissue is important and should be characterized. Methods: In this study, the mechanical behavior of the breast under mammography loading, the displacement of breast tissue, and the dose distribution in compressed and uncompressed tissue are investigated. Dosimetry was performed by two methods: Monte Carlo simulation using the MCNP5 code and thermoluminescence dosimeters. For the Monte Carlo simulations, the dose values in a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated by a finite element model and calculated using the ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing thermoluminescence dosimeters into a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates toward the chest. Results: The results indicate that the cone applicators deliver more than 95% of the dose to depths of 5 to 17 mm, while round applicators increase the skin dose. Nodal displacement under gravity and a 60 N force, i.e. mammography compression, showed 43% contraction in the loading direction and 37% expansion in the orthogonal orientation. Finally, the thermoluminescence dosimeter results are consistent with MCNP5 in the breast phantom and on the chest skin, with average percentage differences of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability of 3D dose calculation by FE modeling. Finally, polyvinyl alcohol is a reliable material as a breast-tissue-equivalent dosimetric phantom that provides the ability of TLD dosimetry
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wall fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematical study of the factors responsible for the discrepancy in the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-12-01
A code package consisting of the Monte Carlo library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
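The particle record described above can be sketched as a simple data structure; the field names and units here are assumptions for illustration, not MCLIB's actual definitions:

```python
from dataclasses import dataclass

@dataclass
class Neutron:
    """Sketch of the particle record described for the MCLIB package:
    vector position and velocity, time of flight, mass, charge, and a
    polarization vector. A statistical weight is added, as is common
    in MC transport codes (an assumption, not stated in the abstract)."""
    position: tuple                       # (x, y, z), e.g. cm
    velocity: tuple                       # (vx, vy, vz), e.g. m/s
    tof: float = 0.0                      # time of flight, s
    mass: float = 1.674927e-27            # neutron mass, kg
    charge: float = 0.0                   # neutrons are neutral
    polarization: tuple = (0.0, 0.0, 1.0)
    weight: float = 1.0
```

A region's action routine would then read and update such a record as the particle is transported.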
Self-Consistent Monte Carlo Simulations of Positive Discharges
NASA Astrophysics Data System (ADS)
Kortshagen, Uwe; Lawler, James E.
1999-10-01
Fully converged simulations of positive column discharges using single electron or ``direct simulation'' Monte Carlo codes were reported at GEC98. Initial solutions at low RxN (product of column radius and gas density) were found using only personal computers. Solutions to higher RxN, corresponding to an ion mean-free-path of 1/4 the column radius, have now been found using a supercomputer. Sixteen converged simulations, reaching a Debye length of 1/17 the column radius, are available [1]. The simulations illustrate sheath formation and the negative dynamic resistance of the positive column at low currents. The simulation results have been reproduced using entirely independent codes. No fluid approximations or plasma-sheath boundary conditions are used. The simulations are valuable for comparison to other types of fluid and kinetic theory models. [1] J. E. Lawler and U. Kortshagen, J. Phys. D: Appl. Phys., submitted.
Resist develop prediction by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun
2002-07-01
Various resist development models have been suggested to describe the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced by using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with other reports. This study can be helpful for developing new photoresists and developers that can be used to pattern device features smaller than 100 nm.
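As a toy illustration of the idea (not the authors' model), development can be treated as the dissolution of de-protected polymer cells reachable by the developer from the resist top surface, with the spread of developed depth across columns as a crude edge-roughness measure. The grid size, de-protection fraction, and 4-neighbour connectivity are assumptions:

```python
import random, statistics

def develop(width=50, height=40, deprotect_frac=0.6, seed=1):
    """Toy resist development: a de-protected cell dissolves only if the
    developer can reach it from the top surface through other dissolved cells."""
    rng = random.Random(seed)
    deprot = [[rng.random() < deprotect_frac for _ in range(width)]
              for _ in range(height)]
    developed = [[False] * width for _ in range(height)]
    # flood-fill from every de-protected cell on the top row
    stack = [(0, x) for x in range(width) if deprot[0][x]]
    while stack:
        y, x = stack.pop()
        if developed[y][x]:
            continue
        developed[y][x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < height and 0 <= nx < width
                    and deprot[ny][nx] and not developed[ny][nx]):
                stack.append((ny, nx))
    # contiguous developed depth per column; its spread ~ edge roughness
    depths = []
    for x in range(width):
        d = 0
        while d < height and developed[d][x]:
            d += 1
        depths.append(d)
    return statistics.mean(depths), statistics.pstdev(depths)
```

Varying `deprotect_frac` (i.e. the protected/de-protected weight ratio) changes both the mean developed depth and the roughness, in the spirit of the study's observation.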
NASA Technical Reports Server (NTRS)
Karakoylu, E.; Franz, B.
2016-01-01
This is a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is for remote sensing reflectance (Rrs) at 443 nm.
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
Wiebe, J; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open-source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm{sup 2} to 98×98 mm{sup 2} (maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
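The "majority of points within 3%" statistic quoted above is a simple pass-rate over paired measured/simulated dose points. A sketch, with hypothetical depth-dose samples (not Novalis Classic data):

```python
def fraction_within(measured, simulated, tol=0.03):
    """Fraction of simulated dose points lying within a relative tolerance
    of the measured values -- the pass statistic quoted in the abstract."""
    ok = sum(1 for m, s in zip(measured, simulated) if abs(s - m) <= tol * m)
    return ok / len(measured)

# hypothetical normalised depth-dose samples, for illustration only
measured = [1.00, 0.95, 0.82, 0.64]
simulated = [1.01, 0.96, 0.80, 0.64]
share = fraction_within(measured, simulated)
```

Clinical commissioning typically uses stricter gamma-index criteria that also account for spatial offsets; the plain dose-difference test above is the simplest version.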
NASA Astrophysics Data System (ADS)
Fougere, Nicolas; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Bockelee-Morvan, Dominique; Calmonte, Ursina; Capaccioni, Fabrizio; Combi, Michael R.; De Keyser, Johan; Debout, Vincent; Erard, Stéphane; Fiethe, Björn; Filacchione, Gianrico; Fink, Uwe; Fuselier, Stephen; Gombosi, T. I.; Hansen, Kenneth C.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Migliorini, Alessandra; Piccioni, Giuseppe; Rinaldi, Giovanna; Rubin, Martin; Shou, Yinsi; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; VIRTIS team and ROSINA team
2016-10-01
During the past few decades, modeling of the cometary coma has seen tremendous improvements, notably with the increase of computer capacity. While the Haser model is still widely used for the interpretation of cometary observations, its rather simplistic assumptions, such as spherical symmetry and constant outflow velocity, prevent it from explaining some of the coma observations. Hence, more complex coma models have emerged, taking full advantage of the numerical approach. The only method that can resolve all the flow regimes encountered in the coma, given the drastic changes of Knudsen number, is the Direct Simulation Monte-Carlo (DSMC) approach. The data acquired by the instruments on board the Rosetta spacecraft provide a large amount of observations regarding the spatial and temporal variations of comet 67P/Churyumov-Gerasimenko's coma. These measurements provide constraints that can be applied to the coma model in order to best describe the rarefied atmosphere of 67P. We present the latest results of our 3D multi-species DSMC model using the Adaptive Mesh Particle Simulator (Tenishev et al. 2008 and 2011, Fougere 2014). The model uses a realistic nucleus shape model from the OSIRIS team and takes into account the self-shadowing created by its concavities. The gas flux at the surface of the nucleus is deduced from the relative orientation with respect to the Sun and an activity distribution that enables simulation of both the non-uniformity of the surface activity and the heterogeneities of the outgassing. The model results are compared to the ROSINA and VIRTIS observations. Progress in incorporating Rosetta measurements from the last half of the mission into our models will be presented. The good agreement between the model and these measurements from two very different techniques reinforces our understanding of the physical processes taking place in the coma.
Leonhard, Kai; Prausnitz, John M.; Radke, Clayton J.
2004-01-01
Amino acid residue–solvent interactions are required for lattice Monte Carlo simulations of model proteins in water. In this study, we propose an interaction-energy scale that is based on the interaction scale by Miyazawa and Jernigan. It permits systematic variation of the amino acid–solvent interactions by introducing a contrast parameter for the hydrophobicity, Cs, and a mean attraction parameter for the amino acids, ω. Changes in the interaction energies strongly affect many protein properties. We present an optimized energy parameter set for best representing realistic behavior typical for many proteins (fast folding and high cooperativity for single chains). Our optimal parameters feature a much weaker hydrophobicity contrast and mean attraction than does the original interaction scale. The proposed interaction scale is designed for calculating the behavior of proteins in bulk and at interfaces as a function of solvent characteristics, as well as protein size and sequence. PMID:14739322
NASA Astrophysics Data System (ADS)
Shi, Feng; Wang, Dezhen; Ren, Chunsheng
2008-06-01
Atmospheric-pressure nonequilibrium discharge plasmas have been applied to plasma processing with modern technology. Simulations of discharges in pure Ar and pure He gases at atmospheric pressure, driven by a high-voltage trapezoidal nanosecond pulse, have been performed using a one-dimensional particle-in-cell Monte Carlo collision (PIC-MCC) model coupled with a renormalization and weighting procedure (mapping algorithm). Numerical results show that the characteristics of the discharge in both inert gases are very similar. There exist effects of a local reverse field and double-peak distributions of the charged particles' density. The electron and ion energy distribution functions are also examined, and the discharge is interpreted in terms of the ionization avalanche. Furthermore, the total current density is a function of time but not of position.
NASA Astrophysics Data System (ADS)
Gratiy, Sergey L.; Walker, Andrew C.; Levin, Deborah A.; Goldstein, David B.; Varghese, Philip L.; Trafton, Laurence M.; Moore, Chris H.
2010-05-01
Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. Improving upon previous models, Walker et al. (Walker, A.C., Gratiy, S.L., Levin, D.A., Goldstein, D.B., Varghese, P.L., Trafton, L.M., Moore, C.H., Stewart, B. [2009]. Icarus) developed a fully 3-D global rarefied gas dynamics model of Io's atmosphere including both sublimation and volcanic sources of SO2 gas. The fidelity of the model is tested by simulating remote observations at selected wavelength bands and comparing them to the corresponding astronomical observations of Io's atmosphere. The simulations are performed with a new 3-D spherical-shell radiative transfer code utilizing a backward Monte Carlo method. We present: (1) simulations of the mid-infrared disk-integrated spectra of Io's sunlit hemisphere at 19 μm, obtained with TEXES during 2001-2004; (2) simulations of disk-resolved images at Lyman-α obtained with the Hubble Space Telescope (HST), Space Telescope Imaging Spectrograph (STIS) during 1997-2001; and (3) disk-integrated simulations of emission line profiles in the millimeter wavelength range obtained with the IRAM-30 m telescope in October-November 1999. We found that the atmospheric model generally reproduces the longitudinal variation in band depth from the mid-infrared data; however, the best match is obtained when our simulation results are shifted ˜30° toward lower orbital longitudes. The simulations of Lyman-α images do not reproduce the mid-to-high latitude bright patches seen in the observations, suggesting that the model atmosphere sustains columns that are too high at those latitudes. The simulations of emission line profiles in the millimeter spectral region support
NASA Astrophysics Data System (ADS)
Soligo, Riccardo
In this work, the insight provided by our sophisticated Full Band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices like GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to the setup of a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly nonlinear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN-HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contribution provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small-signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitive phenomena that limit the frequency response of devices. In Chapter 5 the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. Moreover, the large signal characterization of the
Morphological evolution of growing crystals - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz
1988-01-01
The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
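The kinetic-to-diffusion transition described above can be illustrated with a minimal diffusion-limited-aggregation sketch (a plain DLA variant, not the authors' full model with surface diffusion): a sticking probability near 1 gives open dendritic clusters, while a small sticking probability lets walkers explore more attachment sites and gives more compact growth. Lattice size and launch radii are arbitrary choices:

```python
import math, random

def grow_cluster(n_particles=300, sticking_prob=1.0, size=201, seed=2):
    """Toy 2-D lattice DLA: walkers launched on a circle random-walk
    until they try to step onto the cluster (stick with probability
    sticking_prob) or wander too far away (discarded)."""
    rng = random.Random(seed)
    c = size // 2
    occupied = {(c, c)}                       # seed crystal at the centre
    r_max = 1.0
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        x = c + int((r_max + 5) * math.cos(ang))
        y = c + int((r_max + 5) * math.sin(ang))
        while True:
            dx, dy = rng.choice(moves)
            nx, ny = x + dx, y + dy
            if (nx, ny) in occupied:
                # attachment attempt: low sticking_prob mimics kinetic
                # (surface) control, sticking_prob = 1 diffusion control
                if rng.random() < sticking_prob:
                    occupied.add((x, y))
                    r_max = max(r_max, math.hypot(x - c, y - c))
                    break
                continue
            x, y = nx, ny
            if math.hypot(x - c, y - c) > r_max + 20:
                break                         # escaped; launch next walker
    return occupied
```

Comparing cluster shapes for `sticking_prob = 1.0` versus `0.05` reproduces, qualitatively, the dendritic-versus-compact morphologies the abstract describes.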
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics fixed-source and criticality calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated in SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production processes. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, preequilibrium model, nucleus explosion model, fission model, and evaporation model are incorporated to treat the intermediate energy nuclear
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography
Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.
2000-02-01
The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
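The Henyey-Greenstein phase function has a closed-form inverse CDF, which is the standard way Monte Carlo photonics codes draw scattering angles (the abstract does not state LATIS's internals, so this is a generic sketch; the anisotropy value is illustrative of tissue):

```python
import math, random

def sample_hg_cos_theta(g, rng):
    """Draw cos(theta) from the Henyey-Greenstein phase function by
    inverting its cumulative distribution:
    cos(theta) = (1 + g^2 - ((1 - g^2)/(1 - g + 2 g xi))^2) / (2 g)."""
    xi = rng.random()
    if abs(g) < 1e-6:                  # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(0)
g = 0.9                                # anisotropy typical of soft tissue
n = 200000
mean_cos = sum(sample_hg_cos_theta(g, rng) for _ in range(n)) / n
# the mean scattering cosine of the HG phase function equals g exactly
```

Checking that the sampled mean cosine recovers `g` is a quick validation of the sampler, analogous in spirit to the paper's validation of the Monte Carlo photonics.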
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
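The acceptance-rejection step with a majorant kernel can be sketched in a single-cell, equally-weighted setting (the paper's differentially-weighted, GPU-parallel scheme is considerably more elaborate). The additive kernel and the per-volume rate normalisation below are assumptions for illustration:

```python
import math, random

def coagulate(n0=1000, t_end=1.0, seed=3):
    """Single-cell MC coagulation with additive kernel K(u, v) = u + v.
    A majorant bound K_hat = 2 * max(volume) lets us pick pairs uniformly
    and accept with probability K/K_hat; rejected attempts are null
    events that still advance the clock, as in majorant-kernel schemes."""
    rng = random.Random(seed)
    vols = [1.0] * n0                            # monodisperse start
    t = 0.0
    while t < t_end and len(vols) > 1:
        n = len(vols)
        k_hat = 2.0 * max(vols)                  # majorant kernel bound
        rate = 0.5 * n * (n - 1) * k_hat / n0    # majorant total rate
        t += -math.log(rng.random()) / rate      # exponential waiting time
        i, j = rng.sample(range(n), 2)           # uniform pair choice
        k_ij = vols[i] + vols[j]
        if rng.random() < k_ij / k_hat:          # accept a real coagulation
            vols[i] += vols[j]
            vols.pop(j)
    return vols
```

Total volume is conserved by construction, which makes a convenient sanity check; the closer the majorant tracks the true kernel, the fewer null events are wasted.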
Wu, H; Baynes, R E; Leavens, T; Tell, L A; Riviere, J E
2013-06-01
The objective of this study was to develop a population pharmacokinetic (PK) model and predict tissue residues and the withdrawal interval (WDI) of flunixin in cattle. Data were pooled from published PK studies in which flunixin was administered through various dosage regimens to diverse populations of cattle. A set of liver data used to establish the regulatory label withdrawal time (WDT) was also used in this study. Compartmental models with first-order absorption and elimination were fitted to plasma and liver concentrations by a population PK modeling approach. Monte Carlo simulations were performed with the population mean and variabilities of PK parameters to predict liver concentrations of flunixin. The PK of flunixin was best described by a 3-compartment model with an extra liver compartment. The WDI estimated in this study with liver data only was the same as the label WDT. However, a longer WDI was estimated when both plasma and liver data were included in the population PK model. This study questions the use of small groups of healthy animals to determine WDTs for drugs intended for administration to large diverse populations. This may warrant a reevaluation of the current procedure for establishing WDT to prevent violative residues of flunixin.
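The Monte Carlo step can be illustrated with a one-compartment sketch (the study itself fitted a 3-compartment model with a liver compartment); every number below, including the dose, tolerance, and the clearance and volume distributions, is a hypothetical placeholder, not flunixin's actual population PK:

```python
import math, random

def withdrawal_interval(n=10000, tolerance=0.125, seed=4):
    """Monte Carlo WDI sketch with a one-compartment model
    C(t) = (dose / V) * exp(-(CL / V) * t).
    Sample CL and V from log-normal population distributions and take
    the time by which 99% of the simulated animals fall below tolerance."""
    rng = random.Random(seed)
    dose = 2.2                                        # hypothetical dose
    times = []
    for _ in range(n):
        cl = math.exp(rng.gauss(math.log(0.1), 0.3))  # clearance, log-normal
        v = math.exp(rng.gauss(math.log(0.5), 0.2))   # volume, log-normal
        c0 = dose / v
        # solve C(t) = tolerance  ->  t = ln(C0 / tolerance) * V / CL
        t = max(0.0, math.log(c0 / tolerance) * v / cl)
        times.append(t)
    times.sort()
    return times[int(0.99 * n)]   # 99th percentile across the population
```

Because the WDI is driven by the slow tail of the population, widening the between-animal variability lengthens the estimate, which is the study's central point about small healthy cohorts.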
NASA Astrophysics Data System (ADS)
Luo, L.
2011-12-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square-error (RMSE), and maximizing the Pearson correlation coefficient (r) and the Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced a RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested against an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than traditional manual calibration, which has been the standard practice for similar complex water quality models.
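The three comparison statistics used as calibration objectives are standard; a plain-Python sketch (variable names ours):

```python
import math

def fit_statistics(obs, sim):
    """RMSE, Pearson r, and Nash-Sutcliffe efficiency for paired
    observed (obs) and simulated (sim) series."""
    n = len(obs)
    mean_o = sum(obs) / n
    mean_s = sum(sim) / n
    sq_err = sum((s - o) ** 2 for o, s in zip(obs, sim))
    rmse = math.sqrt(sq_err / n)
    cov = sum((o - mean_o) * (s - mean_s) for o, s in zip(obs, sim))
    var_o = sum((o - mean_o) ** 2 for o in obs)
    var_s = sum((s - mean_s) ** 2 for s in sim)
    r = cov / math.sqrt(var_o * var_s)          # Pearson correlation
    nse = 1.0 - sq_err / var_o                  # Nash-Sutcliffe efficiency
    return rmse, r, nse
```

A Monte Carlo calibration loop then draws random parameter sets, runs the model, and keeps the set that jointly gives low RMSE and high r and Nr (called nse here).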
Monte Carlo modeling and meteor showers
NASA Technical Reports Server (NTRS)
Kulikova, N. V.
1987-01-01
Prediction of short-lived increases in the cosmic dust influx, the concentration in the lower thermosphere of atoms and ions of meteor origin, and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
NASA Technical Reports Server (NTRS)
Combi, Michael R.
2004-01-01
In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites, and their interactions with their outer particles-and-fields environments, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamic models and their direct use in data analysis. He has also worked in ground-based and space-based remote observational work and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year. The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma that will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the
Lou, K; Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y; Clark, J
2014-06-01
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom using a mouse-sized PET (61 mm diameter field of view (FOV)) and the larger phantom using a human brain-sized PET (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and imaged by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and correspondingly 50 measured activity-ranges. The accuracy and precision of these activity-ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separate from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity-range are dominated by the number (N) of coincidence events in the reconstructed image. Their uncertainty decreases in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
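The 1/sqrt(N) behaviour can be reproduced with a toy statistical model. The sigmoid activity-vs-depth profile and the Gaussian approximation to Poisson counting noise below are our assumptions for illustration, not the paper's phantom model:

```python
import math
import random

def estimate_range(total_counts, rng):
    """One noisy activity-range estimate: a sigmoid depth profile with
    counting noise, ranged at the 50%-of-plateau falloff depth."""
    depths = [0.1 * i for i in range(300)]                     # 0-30 cm grid
    shape = [1.0 / (1.0 + math.exp((d - 15.0) / 0.3)) for d in depths]
    norm = sum(shape)
    counts = []
    for s in shape:
        mean = total_counts * s / norm
        # Gaussian approximation to Poisson noise: N(mean, mean)
        counts.append(max(0.0, mean + rng.gauss(0.0, math.sqrt(mean))))
    plateau = sum(counts[:50]) / 50.0                          # proximal plateau
    for d, c in zip(depths, counts):
        if d > 5.0 and c < 0.5 * plateau:                      # 50% falloff depth
            return d
    return depths[-1]

def range_spread(total_counts, trials=40, seed=3):
    """Std. dev. of the range estimate over repeated noisy realisations;
    expected to shrink roughly as 1/sqrt(total_counts)."""
    rng = random.Random(seed)
    est = [estimate_range(total_counts, rng) for _ in range(trials)]
    m = sum(est) / trials
    return math.sqrt(sum((e - m) ** 2 for e in est) / trials)
```

Running the spread at two count levels shows the precision improving with N, which is the statistical-modeling argument of the abstract in miniature.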
Cheng, Feng; Chen, Zhao-Xu
2016-02-01
Pd/ZnO is a promising catalyst studied for methanol steam reforming (MSR) and the 1 : 1 PdZn alloy is demonstrated to be the active component. It is believed that MSR starts from methanol dehydrogenation to methoxy. Previous studies of methanol dehydrogenation on the ideal PdZn(111) surface show that methanol adsorbs weakly on the PdZn(111) surface and it is hard for methanol to transform into methoxy because of the high dehydrogenation barrier, indicating that the catalyst model is not appropriate for investigating the first step of MSR. Using the model derived from our recent kinetic Monte Carlo simulations, we examined the process CH3OH → CH3O → CH2O → CHO → CO. Compared with the ideal model, methanol adsorbs much more strongly and the barrier from CH3OH → CH3O is much lower on the kMC model. On the other hand, the C-H bond breaking of CH3O, CH2O and CHO becomes harder. We show that co-adsorbed water is important for refreshing the active sites. The present study shows that the first MSR step most likely takes place on three-fold hollow sites formed by Zn atoms, and the inhomogeneity of the PdZn alloy may exert significant influences on reactions.
Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters
Brandt, D.; Asai, M.; Brink, P.L.; Cabrera, B.; do Couto e Silva, E.; Kelsey, M.; Leman, S.W.; McArthy, K.; Resch, R.; Wright, D.; Figueroa-Feliciano, E.
2012-06-12
There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.
Technology Transfer Automated Retrieval System (TEKTRAN)
A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50 °C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
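A far simpler cousin of such a simulation can be sketched as Metropolis-style hopping on a 1D lattice; the hopping rule, energies, and field values below are our illustrative assumptions, not the coarse-grained model described in the abstract:

```python
import math
import random

def drift_per_step(site_energies, field=0.05, kT=0.025, steps=4000, seed=5):
    """Nearest-neighbour hopping of one carrier on a 1D periodic lattice
    (energies and field in eV, field given per lattice site).

    A hop is accepted if it lowers the total energy (including the field
    term) or otherwise with Boltzmann probability exp(-dE/kT). The mean
    displacement per attempt is a crude proxy for carrier mobility."""
    rng = random.Random(seed)
    n = len(site_energies)
    pos, disp = 0, 0
    for _ in range(steps):
        step = rng.choice((-1, 1))
        new = (pos + step) % n
        dE = site_energies[new] - site_energies[pos] - field * step
        if dE <= 0.0 or rng.random() < math.exp(-dE / kT):
            pos = new
            disp += step
    return disp / steps
```

Comparing a uniform (ordered) lattice with one carrying random site energies illustrates the abstract's point that morphology and energetic disorder control transport.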
Simulating oblique incident irradiation using the BEAMnrc Monte Carlo code.
Downes, P; Spezi, E
2009-04-01
A new source for the simulation of oblique incident irradiation has been developed for the BEAMnrc Monte Carlo code. In this work, we describe a method for the simulation of any component that is rotated at some angle relative to the central axis of the modelled radiation unit. The performance of the new BEAMnrc source was validated against experimental measurements. The comparison with ion chamber data showed very good agreement between experiment and calculation for a number of oblique irradiation angles ranging from 0 to 30 degrees. The routine was also cross-validated, in geometrically equivalent conditions, against a different radiation source available in the DOSXYZnrc code. The test showed excellent consistency between the two routines. The new radiation source can be particularly useful for the Monte Carlo simulation of radiation units in which the radiation beam is tilted with respect to the unit's central axis. To highlight this, a modern cone-beam CT unit is modelled using this new source and validated against measurement.
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
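The core of such a simulation, repeated projective measurements interrupting unitary evolution, can be sketched for a decoherence-free two-level system. The Rabi frequency and trial counts are illustrative, and decoherence is deliberately omitted, so this shows only the ideal Zeno suppression that the abstract says breaks down beyond the brain decoherence time:

```python
import math
import random

def zeno_survival(omega, t_total, n_meas, trials=2000, seed=11):
    """Monte Carlo sketch of the quantum Zeno effect for a two-level system.

    Unitary Rabi rotation at frequency omega is interrupted by n_meas
    equally spaced projective measurements; the per-interval survival
    probability is cos^2(omega*dt/2). Returns the fraction of trials
    found in the initial state at every measurement."""
    rng = random.Random(seed)
    dt = t_total / n_meas
    p_stay = math.cos(0.5 * omega * dt) ** 2
    survived = 0
    for _ in range(trials):
        if all(rng.random() < p_stay for _ in range(n_meas)):
            survived += 1
    return survived / trials
```

Increasing the measurement frequency freezes the evolution (survival approaches 1), which is the Zeno effect; adding a decoherence channel between measurements, as the study does, destroys this freezing once the interval exceeds the decoherence time.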
Kern, Christoph
2016-01-01
This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible-spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions), with regard to a change in SO2 column density or aerosol optical thickness, can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes, using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation/quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations using either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Monte Carlo simulation of large electron fields.
Faddegon, Bruce A; Perl, Joseph; Asai, Makoto
2008-03-01
Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.
Monte Carlo simulations for spinodal decomposition
Sander, E.; Wanner, T.
1999-06-01
This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.
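The linear mechanism can be illustrated with a toy Monte Carlo over random initial data. We assume a linearized growth law lambda_k = k^2 (a - eps^2 k^2) with a = -f''(mu) > 0 in the spinodal interval, a simplification of the Cahn-Hilliard linearization; the exit radius and mode count are arbitrary:

```python
import math
import random

def dominant_exit_mode(a, eps, radius=1e6, modes=20, trials=200, seed=2):
    """Toy linearized spinodal-decomposition experiment.

    Fourier amplitudes grow as c_k(t) = c_k(0) * exp(lambda_k * t) with
    lambda_k = k^2 (a - eps^2 k^2). For random Gaussian initial data,
    record which mode dominates when the solution norm first exceeds
    `radius`; return the most frequent dominant wavenumber."""
    rng = random.Random(seed)
    lam = [k * k * (a - (eps * k) ** 2) for k in range(1, modes + 1)]
    tally = {}
    for _ in range(trials):
        c0 = [rng.gauss(0.0, 1.0) for _ in lam]
        t = 0.0   # march time forward until the linear solution exits the ball
        while sum((c * math.exp(l * t)) ** 2 for c, l in zip(c0, lam)) < radius ** 2:
            t += 0.01
        amps = [abs(c) * math.exp(l * t) for c, l in zip(c0, lam)]
        k_dom = 1 + amps.index(max(amps))
        tally[k_dom] = tally.get(k_dom, 0) + 1
    return max(tally, key=tally.get)
```

For a = 1 and eps = 0.1 the growth rate peaks near k* = sqrt(a / (2 eps^2)) ≈ 7, so the exit distribution concentrates around that characteristic wavelength, mirroring the densities of exit points studied in the paper.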
NASA Astrophysics Data System (ADS)
Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael
2013-05-01
In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
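The conservation property, that the total tracer amount is preserved between time points, holds automatically for any linear compartment model integrated this way. A minimal sketch with a hypothetical blood-kidney-bladder chain and illustrative rate constants (not the MAG3 model's values, and ignoring radioactive decay):

```python
def compartment_tacs(k_blood_kidney, k_kidney_bladder, t_end=60.0, dt=0.01):
    """Euler integration of a 3-compartment chain; returns sampled
    time-activity curves (t, blood, kidney, bladder). Every unit of
    activity leaving one compartment enters the next, so the total
    is conserved by construction."""
    blood, kidney, bladder = 1.0, 0.0, 0.0
    tacs = []
    steps = int(t_end / dt)
    for i in range(steps):
        flow_in = k_blood_kidney * blood * dt      # blood -> kidney
        flow_out = k_kidney_bladder * kidney * dt  # kidney -> bladder
        blood -= flow_in
        kidney += flow_in - flow_out
        bladder += flow_out
        if i % 100 == 0:
            tacs.append((i * dt, blood, kidney, bladder))
    return tacs
```

Changing the rate constants mimics the abstract's controlled modification of, e.g., relative renal uptake or transit time, while the summed activity stays fixed at every sampled time point.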
Monte-Carlo simulations of chemical reactions in molecular crystals
NASA Astrophysics Data System (ADS)
Even, J.; Bertault, M.
1999-01-01
Chemical reactions in molecular crystals, yielding new entities (dimers, trimers,…, polymers) in the original structure, are simulated for the first time by stochastic Monte Carlo methods. The results are compared with those obtained by deterministic methods. They show that numerical simulation is a tool for understanding the evolution of these mixed systems. They are in kinetic and not in thermodynamic control. Reactive site distributions, x-ray diffuse scattering, and chain length distributions can be simulated. Comparisons are made with deterministic models and experimental results obtained in the case of the solid state dimerization of cinnamic acid in the beta phase and in the case of the solid state polymerization of diacetylenes.
Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor
2015-03-01
Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level are only available for 28 of the 35 listed European countries for the period prior to 2008, and 46% of those data are missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model trained and tested using the same input variables, using multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis showed that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
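A GRNN prediction is Nadaraya-Watson kernel regression with a single smoothing parameter; a minimal sketch (variable names ours):

```python
import math

def grnn_predict(x_train, y_train, x, sigma=0.5):
    """General regression neural network output for query point x:
    a Gaussian-kernel weighted average of the training targets.
    sigma is the one smoothing parameter a GRNN tunes."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    weights = [math.exp(-dist2(xi, x) / (2.0 * sigma ** 2)) for xi in x_train]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total
```

Because each input dimension enters only through the distance, dropping an input (as the Monte Carlo selection step does) simply shortens the feature tuples; no retraining of weights is involved beyond re-tuning sigma.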
Papadimitroulas, P; Kostou, T; Kagadis, G; Loudos, G
2015-06-15
Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning, respectively. The resolution of the phantom was set to 3.5 mm^3. The impact of the motion on spatial resolution was quantified using a sphere with 3.5 mm diameter and 10 separate time frames, in the modeled ECAM SPECT scanner. Finally, the respiratory motion impact on resolution and imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: FWHM was extracted for a static and a moving sphere which was ∼23 cm away from the entrance of the SPECT head. The difference in the FWHM was 20% between the two simulations. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was inserted in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a displacement of ∼2.5 cm. Conclusion: As we move to individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like motion correction
van Bijnen, M; Korving, H; Clemens, F
2012-01-01
In-sewer defects are directly responsible for affecting the performance of sewer systems. Notwithstanding the impact of the condition of the assets on serviceability, sewer performance is usually assessed assuming the absence of in-sewer defects. This leads to an overestimation of serviceability. This paper presents the results of a study in two research catchments on the impact of in-sewer defects on urban pluvial flooding at network level. Impacts are assessed using Monte Carlo simulations with a full hydrodynamic model of the sewer system. The studied defects include root intrusion, surface damage, attached and settled deposits, and sedimentation. These defects are based on field observations and translated to two model parameters (roughness and sedimentation). The calculation results demonstrate that the return period of flooding, number of flooded locations and flooded volumes are substantially affected by in-sewer defects. Irrespective of the type of sewer system, the impact of sedimentation is much larger than the impact of roughness. Further research will focus on comparing calculated and measured behaviour in one of the research catchments.
Choi, M.; Chan, V. S.; Lao, L. L.; Pinsker, R. I.; Green, D.; Berry, L. A.; Jaeger, F.; Park, J. M.; Heidbrink, W. W.; Liu, D.; Podesta, M.; Harvey, R.; Smithe, D. N.; Bonoli, P.
2010-05-15
The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments.
Choi, M.; Green, David L; Heidbrink, W. W.; Harvey, R. W.; Liu, D.; Chan, V. S.; Berry, Lee A; Jaeger, Erwin Frederick; Lao, L.L.; Pinsker, R. I.; Podesta, M.; Smithe, D. N.; Park, J. M.; Bonoli, P.
2010-01-01
The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments. (C) 2010 American Institute of Physics. [doi:10.1063/1.3314336]
On the time scale associated with Monte Carlo simulations
Bal, Kristof M. Neyts, Erik C.
2014-11-28
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations allowing, for example, phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.
On the time scale associated with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bal, Kristof M.; Neyts, Erik C.
2014-11-01
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations allowing, for example, phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
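The budgeting approach described above can be sketched as a toy Monte Carlo: per-maneuver delta-V is a nominal correction plus positive random contributions for SRP/observation-schedule uncertainty and maneuver execution error, and the budget is read off as a high percentile of the mission total. Every distribution, magnitude, and the maneuver cadence below is an illustrative assumption, not a JWST value.

```python
import numpy as np

rng = np.random.default_rng(2)

n_trials = 5_000
mission_years = 10.5
maneuvers_per_year = 12                     # assumed SK cadence (roughly monthly)
n_sk = int(mission_years * maneuvers_per_year)

# Per-maneuver delta-V (m/s): nominal correction plus positive contributions
# from SRP uncertainty (attitude/schedule draws) and execution errors.
nominal = 0.10                              # assumed nominal correction per maneuver
srp_term = rng.lognormal(np.log(0.05), 0.6, size=(n_trials, n_sk))
exec_err = np.abs(rng.normal(0.0, 0.01, size=(n_trials, n_sk)))

total_dv = (nominal + srp_term + exec_err).sum(axis=1)

print(f"mean SK budget  : {total_dv.mean():.1f} m/s over {mission_years} yr")
print(f"99th percentile : {np.percentile(total_dv, 99):.1f} m/s")
```

Sizing the budget to a high percentile rather than the mean is what makes the random draws (here of the SRP term standing in for the unknown observation schedule) matter.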
ERIC Educational Resources Information Center
Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O.
2007-01-01
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy
Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.
2015-01-01
Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve a probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
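The PTA calculation can be sketched as follows: sample pharmacokinetic parameters per subject, compute the steady-state concentration profile of an infused one-compartment drug, score the fraction of the dosing interval above the MIC, and report the fraction of subjects meeting the pharmacodynamic target. The population PK values, the single MIC, and the 40% fT>MIC target below are illustrative assumptions for a meropenem-like agent, not the institution-specific inputs of the study, so the numbers will not reproduce the paper's PTAs.

```python
import numpy as np

rng = np.random.default_rng(3)

def conc_profile(dose_mg, t_inf, tau, cl, v, n_doses=10, pts=240):
    """Concentration (mg/L) over one steady-state dosing interval for a
    one-compartment model with a zero-order (constant-rate) infusion."""
    k = cl / v
    rate = dose_mg / t_inf
    t = np.linspace(0.0, tau, pts)
    c = np.zeros_like(t)
    for d in range(n_doses):                 # superpose the preceding doses
        since = t + d * tau                  # time since dose d began
        infused = np.minimum(since, t_inf)   # portion spent infusing
        c += rate / (k * v) * (1 - np.exp(-k * infused)) * np.exp(-k * (since - infused))
    return c

n_subj = 2_000
# Assumed population PK: CL ~ 12 L/h (30% CV), V ~ 20 L (25% CV)
cl = rng.lognormal(np.log(12.0), 0.30, n_subj)
v = rng.lognormal(np.log(20.0), 0.25, n_subj)
mic = 8.0                                    # one illustrative MIC (mg/L)

pta = {}
for label, t_inf in [("intermittent (0.5 h)", 0.5), ("prolonged (3 h)", 3.0)]:
    ft = np.array([np.mean(conc_profile(1000.0, t_inf, 8.0, c_, v_) > mic)
                   for c_, v_ in zip(cl, v)])
    pta[label] = np.mean(ft >= 0.40)         # 40% fT>MIC carbapenem-type target
    print(f"{label}: PTA = {100 * pta[label]:.0f}%")
```

Lengthening the infusion lowers the peak but holds the concentration above the MIC for longer, which is why the prolonged regimens attain the target more often, the same qualitative effect the study reports.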
NASA Astrophysics Data System (ADS)
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulation were performed on three Schiff bases, namely 4-(4-bromophenyl)-N′-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N′-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC), and 4-(4-bromophenyl)-N′-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity, such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA) and nucleophilicity (ω), were computed and discussed. Monte Carlo simulations were applied to search for the most stable configuration and adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
Monte Carlo simulation of the terrestrial hydrogen exosphere
Hodges, R.R. Jr.
1994-12-01
Methods for Monte Carlo simulation of planetary exospheres have evolved from early work on the lunar atmosphere, where the regolith surface provides a well defined exobase. A major limitation of the successor simulations of the exospheres of Earth and Venus is the use of an exobase surface as an artifice to separate the collisional processes of the thermosphere from a collisionless exosphere. In this paper a new generalized approach to exosphere simulation is described, wherein the exobase is replaced by a barometric depletion of the major constituents of the thermosphere. Exospheric atoms in the thermosphere-exosphere transition region, and in the outer exosphere as well, travel in ballistic trajectories that are interrupted by collisions with the background gas, and by charge exchange interactions with ionospheric particles. The modified simulator has been applied to the terrestrial hydrogen exosphere problem, using velocity-dependent differential cross sections to provide statistically correct collisional scattering in H-O and H-H(+) interactions. Global models are presented for both solstice and equinox over the effective solar cycle range of the F{sub 10.7} index (80 to 230). Simulation results show significant differences with previous terrestrial exosphere models, as well as with the H distributions of the MSIS-86 thermosphere model.
Monte Carlo simulation of the terrestrial hydrogen exosphere
NASA Technical Reports Server (NTRS)
Hodges, R. Richard, Jr.
1994-01-01
Methods for Monte Carlo simulation of planetary exospheres have evolved from early work on the lunar atmosphere, where the regolith surface provides a well defined exobase. A major limitation of the successor simulations of the exospheres of Earth and Venus is the use of an exobase surface as an artifice to separate the collisional processes of the thermosphere from a collisionless exosphere. In this paper a new generalized approach to exosphere simulation is described, wherein the exobase is replaced by a barometric depletion of the major constituents of the thermosphere. Exospheric atoms in the thermosphere-exosphere transition region, and in the outer exosphere as well, travel in ballistic trajectories that are interrupted by collisions with the background gas, and by charge exchange interactions with ionospheric particles. The modified simulator has been applied to the terrestrial hydrogen exosphere problem, using velocity-dependent differential cross sections to provide statistically correct collisional scattering in H-O and H-H(+) interactions. Global models are presented for both solstice and equinox over the effective solar cycle range of the F(sub 10.7) index (80 to 230). Simulation results show significant differences with previous terrestrial exosphere models, as well as with the H distributions of the MSIS-86 thermosphere model.
ERIC Educational Resources Information Center
Rutherford, Brent M.
A large number of correlational models for cross-tabular analysis are available for utilization by social scientists for data description. Criteria for selection (such as levels of measurement and proportional reduction in error) do not lead to conclusive model choice. Moreover, such criteria may be irrelevant. More pertinent criteria are…
Rivard, Mark J.
2009-02-15
Smaller diameter brachytherapy seeds for permanent interstitial implantation allow for use of smaller diameter implant needles. The use of smaller diameter needles may provide a lower incidence of healthy-tissue complications. This study determines the brachytherapy dosimetry parameters for the smaller diameter source (model 9011) and comments on the dosimetric comparison between this new source and the conventional brachytherapy seed (model 6711).
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2016-01-01
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Monte Carlo simulations of nanoscale focused neon ion beam sputtering.
Timilsina, Rajendra; Rack, Philip D
2013-12-13
A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed.
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static: they can only perform simulations of problems with fixed physics and geometry settings. Proton therapy is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained, rather than an accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis toolkit ROOT was used to score the simulation results during a Geant4 run and to analyze the data and plot results after simulation. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body after treating a patient with prostate cancer using proton therapy.
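The CT-to-voxel-geometry conversion described above (done in MATLAB in the study) amounts to mapping each voxel's Hounsfield unit to a material and density. A minimal Python sketch, with a synthetic CT array and purely illustrative HU thresholds and densities standing in for a real DICOM volume:

```python
import numpy as np

# Toy CT volume in Hounsfield units (a real one would come from DICOM slices).
rng = np.random.default_rng(4)
hu = rng.normal(0.0, 30.0, size=(64, 64, 32))   # soft-tissue-like background
hu[20:40, 20:40, 10:20] += 900.0                # a bone-like inclusion

# HU thresholds -> material index (assumed, illustrative segmentation)
bins = [-1000.0, -200.0, 150.0, 3000.0]         # air | soft tissue | bone
materials = np.digitize(hu, bins) - 1           # 0=air, 1=soft tissue, 2=bone

# Per-voxel densities (g/cm^3) a Monte Carlo code would need alongside materials
density_of = np.array([0.0012, 1.0, 1.85])
density = density_of[np.clip(materials, 0, 2)]

print("voxel counts per material:", np.bincount(np.clip(materials, 0, 2).ravel()))
```

A transport code then walks this material/density grid instead of an idealized mathematical phantom, which is what makes voxel-level dose scoring possible.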
NASA Astrophysics Data System (ADS)
Cullen, John J.
Part I begins with an account of groups of Lie-Bäcklund (L-B) tangent transformations; it is then shown that L-B symmetry operators depending on integrals (nonlocal variables), such as discussed by Konopelchenko and Mokhnachev (1979), are related by change of variables to the L-B operators which involve no more than derivatives. A general method is set down for transforming a given L-B operator into a new one, by any invertible transformation depending on (..., D_x^{-1}u, u, u_x, ...). It is shown that once a given differential equation admits a L-B operator, there is in general a very large number of related ("secondary") equations which admit the same operator. The L-B theory involving nonlocal variables is used to characterize group theoretically the linearization both of the Burgers equation, u_t + u u_x - u_xx = 0, and of the o.d.e. u_xx + ω²(x)u + Ku^{-3} = 0. Secondary equations are found to play an important role in understanding the group theoretical background to the linearization of differential equations. Part II deals with Monte Carlo simulations of the 1-d quantum Heisenberg and XY-models, using an approach suggested by Suzuki (1976). The simulation is actually carried out on a 2-d, m x N, Ising-like system, equivalent to the original N-spin quantum system when m → ∞. The results for m ≤ 10 and kT/|J| ≥ 0.0125 are good enough to show that the method is generally applicable to quantum spin models; however some difficulties caused by singular bonding in the classical lattice (Wiesler 1982) and by the generation of unwanted states have to be taken into account in practice. The finite-size scaling method of Fisher and Ferdinand is adapted for use near T = 0 in the ferromagnetic Heisenberg model; applied to the simulation data it shows that the low-temperature susceptibility behaves as T^{-γ}, where γ = 1.32 ± 10%. Also, simple and potentially useful finite-size scaling
NASA Astrophysics Data System (ADS)
Mumford, K. G.; Mustafa, N. A.; Gerhard, J.
2012-12-01
At many former industrial sites, nonaqueous phase liquid (NAPL) contamination presents a significant limitation to site closure and brownfield redevelopment. Achieving site closure means soil and/or groundwater remediation to a level at which the associated risk is reduced to an acceptable level. In some jurisdictions, this risk is evaluated at the site boundary even if the critical risk receptors are located in the surrounding community; the consequence may be a site left untreated because the remediation target is technically or economically impractical. The goal of this study was to explore the implications of assessing risk at the site boundary versus in the community, and the factors that affect the differences between the two. Because the controlling risk pathway for many volatile organic compounds (VOCs) is the contamination of indoor air, risk assessment at the community scale requires simulation tools that can predict the transport of dissolved VOCs in groundwater followed by vapour intrusion into residential houses. Existing tools and research had focused on vapour intrusion only in the near vicinity of the source (i.e., scale of meters) and primarily at steady state. Therefore, this work developed a novel numerical simulator that couples an established groundwater flow and contaminant transport model to a state-of-the-art vapour intrusion model, which enables the prediction of indoor air concentrations in response to an evolving groundwater plume at the community (i.e., kilometre) scale. In the first phase of this work, the extent of source zone remediation required to achieve regulatory compliance at the site boundary was compared to the extent required to achieve compliance at receptors in the community. The sensitivity of this difference to the physicochemical properties of the contaminant, and to whether compliance was based on groundwater or indoor air risk receptors, was evaluated. In the second phase of this work, the influence of heterogeneity on the
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
NASA Astrophysics Data System (ADS)
Díez, A.; Largo, J.; Solana, J. R.
2006-08-01
Computer simulations have been performed for fluids with van der Waals potential, that is, hard spheres with attractive inverse power tails, to determine the equation of state and the excess energy. On the other hand, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined too from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, it has been found that results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
NASA Astrophysics Data System (ADS)
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits the vessel, ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP. The fluorescence from layers below in this second situation does not contribute to the signal. Therefore, the possibility that a device could be built to detect ZPP fluorescence in the retina looks very promising.
Monte Carlo modelling of TRIGA research reactor
NASA Astrophysics Data System (ADS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous-energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Rikvold, P. A.
2015-01-01
The Ziff-Gulari-Barshad (ZGB) model, a simplified description of the oxidation of carbon monoxide (CO) on a catalyst surface, is widely used to study properties of nonequilibrium phase transitions. In particular, it exhibits a nonequilibrium, discontinuous transition between a reactive and a CO poisoned phase. If one allows a nonzero rate of CO desorption (k), the line of phase transitions terminates at a critical point (kc). In this work, instead of restricting the CO and atomic oxygen (O) to react to form carbon dioxide (CO2) only when they are adsorbed in close proximity, we consider a modified model that includes an adjustable probability for adsorbed CO and O atoms located far apart on the lattice to react. We employ large-scale Monte Carlo simulations for system sizes up to 240 × 240 lattice sites, using the crossing of fourth-order cumulants to study the critical properties of this system. We find that the nonequilibrium critical point changes from the two-dimensional Ising universality class to the mean-field universality class upon introducing even a weak long-range reactivity mechanism. This conclusion is supported by measurements of cumulant fixed-point values, cluster percolation probabilities, correlation-length finite-size scaling properties, and the critical exponent ratio β/ν. The observed behavior is consistent with that of the equilibrium Ising ferromagnet with additional weak long-range interactions [T. Nakada, P. A. Rikvold, T. Mori, M. Nishino, and S. Miyashita, Phys. Rev. B 84, 054433 (2011), 10.1103/PhysRevB.84.054433]. The large system sizes and the use of fourth-order cumulants also enable determination with improved accuracy of the critical point of the original ZGB model with CO desorption.
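The original ZGB dynamics referenced above is compact enough to sketch directly: CO arrives at single sites with probability y, O2 dissociates onto empty nearest-neighbour pairs otherwise, and adjacent CO-O pairs react and leave the surface. The lattice size, sweep count, and y values below are illustrative; this minimal version omits the paper's desorption and long-range-reactivity extensions.

```python
import numpy as np

rng = np.random.default_rng(5)
EMPTY, CO, O = 0, 1, 2

def zgb(y, L=20, sweeps=300):
    """Minimal ZGB surface-reaction model on an L x L periodic square
    lattice; returns (CO coverage, O coverage) after `sweeps` MC sweeps."""
    lat = np.zeros((L, L), dtype=np.int8)

    def neighbors(i, j):
        return [((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]

    def react(site, partner):
        """CO + O -> CO2: vacate `site` and one random adjacent `partner`."""
        nbrs = [p for p in neighbors(*site) if lat[p] == partner]
        if nbrs:
            lat[site] = EMPTY
            lat[nbrs[rng.integers(len(nbrs))]] = EMPTY

    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, 2)
        if rng.random() < y:                      # CO arrival at a single site
            if lat[i, j] == EMPTY:
                lat[i, j] = CO
                react((i, j), O)
        else:                                     # O2 arrival needs an empty pair
            ni, nj = neighbors(i, j)[rng.integers(4)]
            if lat[i, j] == EMPTY and lat[ni, nj] == EMPTY:
                for site in ((i, j), (ni, nj)):
                    lat[site] = O
                    react(site, CO)

    return float(np.mean(lat == CO)), float(np.mean(lat == O))

results = {y: zgb(y) for y in (0.3, 0.5, 0.7)}
for y, (co, o) in results.items():
    print(f"y = {y}: CO coverage {co:.2f}, O coverage {o:.2f}")
```

Runs below the first transition (y ≈ 0.39) drift toward O poisoning, runs above the second (y ≈ 0.525) toward CO poisoning, with a reactive window in between, the phase structure the abstract builds on.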
NASA Astrophysics Data System (ADS)
Dunne, Lawrence J.; Manos, George; Rekabi, Mahdi
2009-01-01
Adsorption of xenon in carbon nanotubes has been investigated by Kuznetsova et al. [A. Kuznetsova, J.T. Yates Jr., J. Liu, R.E. Smalley, J. Chem. Phys. 112 (2000) 9590] and Simonyan et al. [V. Simonyan, J.K. Johnson, A Kuznetsova, J.T. Yates Jr., J. Chem. Phys. 114 (2001) 4180] where endohedral adsorption isotherms show a step-like structure. A matrix method is used for calculation of the statistical mechanics of a lattice model of xenon endohedral adsorption which reproduces the isotherm structure while exohedral adsorption is treated by mean-field theory.
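The "matrix method" named above is a transfer-matrix evaluation of a lattice adsorption model. As a generic illustration (not the authors' nanotube model), the one-dimensional lattice gas with nearest-neighbour attraction eps and chemical potential mu has a symmetric 2 x 2 transfer matrix whose top eigenvector gives the coverage exactly:

```python
import numpy as np

def coverage(mu, eps, T=1.0):
    """Mean site occupancy of a 1-D lattice gas (nearest-neighbour
    attraction eps, chemical potential mu) via the transfer matrix."""
    beta = 1.0 / T
    n = np.array([0, 1])
    # Symmetric transfer matrix: tm[a, b] = exp(beta*(eps*a*b + mu*(a + b)/2))
    tm = np.exp(beta * (eps * np.outer(n, n) + 0.5 * mu * (n[:, None] + n[None, :])))
    w, v = np.linalg.eigh(tm)
    phi = v[:, np.argmax(w)]            # eigenvector of the largest eigenvalue
    return float(phi @ np.diag(n) @ phi) / float(phi @ phi)

# Isotherm: coverage versus chemical potential (a proxy for log pressure)
for mu in (-2.0, 0.0, 2.0):
    print(f"mu = {mu:+.1f}: theta = {coverage(mu, eps=1.0):.3f}")
```

Sweeping mu traces a smooth but increasingly sharp isotherm as eps grows, the lattice-model mechanism behind step-like adsorption structure; the paper's endohedral model is richer but rests on the same eigenvalue construction.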
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
NASA Astrophysics Data System (ADS)
Lin, T.; Ke, X.; Thesberg, M.; Schiffer, P.; Melko, R. G.; Gingras, M. J. P.
2014-12-01
Spin ice materials, such as Dy2Ti2O7 and Ho2Ti2O7, are highly frustrated magnetic systems. Their low-temperature strongly correlated state can be mapped onto the proton disordered state of common water ice. As a result, spin ices display the same low-temperature residual Pauling entropy as water ice, at least in calorimetric experiments that are equilibrated over moderately long time scales. It was found in a previous study [X. Ke et al., Phys. Rev. Lett. 99, 137203 (2007), 10.1103/PhysRevLett.99.137203] that, upon dilution of the magnetic rare-earth ions (Dy3+ and Ho3+) by nonmagnetic yttrium (Y3+) ions, the residual entropy depends nonmonotonically on the concentration of Y3+ ions. A quantitative description of the magnetic specific heat of site-diluted spin ice materials can be viewed as a further test aimed at validating the microscopic Hamiltonian description of these systems. In this work, we report results from Monte Carlo simulations of site-diluted microscopic dipolar spin ice models (DSIM) that account quantitatively for the experimental specific-heat measurements, and thus also for the residual entropy, as a function of dilution, for both Dy2-xYxTi2O7 and Ho2-xYxTi2O7. The main features of the dilution physics displayed by the magnetic specific-heat data are quantitatively captured by the diluted DSIM up to 85% of the magnetic ions diluted (x = 1.7). The previously reported departures in the residual entropy between Dy2-xYxTi2O7 versus Ho2-xYxTi2O7, as well as with a site-dilution variant of Pauling's approximation, are thus rationalized through the site-diluted DSIM. We find for 90% (x = 1.8) and 95% (x = 1.9) of the magnetic ions diluted in Dy2-xYxTi2O7 a significant discrepancy between the experimental and Monte Carlo specific-heat results. We discuss possible reasons for this disagreement.
Residual entropy of ice III from Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kolafa, Jiří
2016-03-01
We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β assuming equal energies of all configurations. To do this, a discrete ice model with a Bjerrum defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes, followed by thermodynamic integration and extrapolation to N = ∞. As for other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation predicts. However, the corrections caused by fluctuation of the energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.
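The workflow named above, Metropolis Monte Carlo followed by thermodynamic integration from infinite temperature, can be illustrated on a toy system with a known answer. The sketch below applies it to a 1D Ising ring rather than an ice model (all parameters are illustrative) and recovers the exact entropy per spin, ln(2 cosh βJ) − βJ tanh βJ:

```python
import math, random

def metropolis_energy(beta, N=100, sweeps=2000, seed=1):
    """Mean energy per spin of a 1D Ising ring (J = 1) from Metropolis sampling."""
    rng = random.Random(seed)
    s = [1] * N
    e_sum, count = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.randrange(N)
            # energy change on flipping spin i (negative index closes the ring)
            dE = 2.0 * s[i] * (s[i - 1] + s[(i + 1) % N])
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]
        if sweep >= sweeps // 2:                      # second half: measurement
            E = sum(-s[i] * s[(i + 1) % N] for i in range(N))
            e_sum += E / N
            count += 1
    return e_sum / count

# Thermodynamic integration from infinite temperature:
#   ln Z(beta)/N = ln 2 - integral_0^beta <E>/N dbeta'
betas = [0.1 * k for k in range(11)]
energies = [metropolis_energy(b, seed=k) if b > 0 else 0.0
            for k, b in enumerate(betas)]
lnZ = math.log(2.0)
for k in range(1, len(betas)):                        # trapezoidal rule
    lnZ -= 0.5 * (energies[k] + energies[k - 1]) * (betas[k] - betas[k - 1])
entropy = lnZ + betas[-1] * energies[-1]              # S/(N k_B) at beta*J = 1
exact = math.log(2.0 * math.cosh(1.0)) - math.tanh(1.0)
```

The same bookkeeping (sample mean energies over a temperature grid, integrate, add βU) carries over to the ice model, with the Bjerrum penalty playing the role of J.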
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool to investigate the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event-by-event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with the experimental results. In this paper, the influence of the inelastic cross sections and of the vibrational excitation reaction on the parameters and on the DNA strand break yields was studied. Further work on NASIC is underway.
Monte Carlo simulation in statistical physics: an introduction
NASA Astrophysics Data System (ADS)
Binder, K.; Heermann, D. W.
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.
Monte Carlo simulation of the SPEAR reflectometer at LANSCE
Smith, G.S.
1995-12-31
The Monte Carlo instrument simulation code, MCLIB, contains elements to represent several components found in neutron spectrometers including slits, choppers, detectors, sources and various samples. Using these elements to represent the components of a neutron scattering instrument, one can simulate, for example, an inelastic spectrometer, a small angle scattering machine, or a reflectometer. In order to benchmark the code, we chose to compare simulated data from the MCLIB code with an actual experiment performed on the SPEAR reflectometer at LANSCE. This was done by first fitting an actual SPEAR data set to obtain the model scattering-length-density profile, β(z), for the sample and the substrate. Then these parameters were used as input values for the sample scattering function. A simplified model of SPEAR was chosen which contained all of the essential components of the instrument. A code containing the MCLIB subroutines was then written to simulate this simplified instrument. The resulting data were then fitted and compared to the actual data set in terms of statistics, resolution and accuracy.
Kinetic Monte Carlo simulations of void lattice formation during irradiation
NASA Astrophysics Data System (ADS)
Heinisch, H. L.; Singh, B. N.
2003-11-01
Over the last decade, molecular dynamics simulations of displacement cascades have revealed that glissile clusters of self-interstitial crowdions are formed directly in cascades and that they migrate one-dimensionally along close-packed directions with extremely low activation energies. Occasionally, under various conditions, a crowdion cluster can change its Burgers vector and glide along a different close-packed direction. The recently developed production bias model (PBM) of microstructure evolution under irradiation has been structured specifically to take into account the unique properties of the vacancy and interstitial clusters produced in the cascades. Atomic-scale kinetic Monte Carlo (KMC) simulations have played a useful role in understanding the defect reaction kinetics of one-dimensionally migrating crowdion clusters as a function of the frequency of direction changes. This has made it possible to incorporate the migration properties of crowdion clusters and changes in reaction kinetics into the PBM. In the present paper we utilize similar KMC simulations to investigate the significant role that crowdion clusters can play in the formation and stability of void lattices. The creation of stable void lattices, starting from a random distribution of voids, is simulated by a KMC model in which vacancies migrate three-dimensionally and self-interstitial atom (SIA) clusters migrate one-dimensionally, interrupted by directional changes. The necessity of both one-dimensional migration and Burgers vectors changes of SIA clusters for the production of stable void lattices is demonstrated, and the effects of the frequency of Burgers vector changes are described.
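The defect kinetics described above (vacancies migrating three-dimensionally, SIA clusters gliding one-dimensionally with occasional Burgers vector changes) can be caricatured in a few lines. The cubic glide axes and the rotation probability below are illustrative stand-ins, not the authors' KMC model:

```python
import random

rng = random.Random(42)
DIRS_3D = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
GLIDE_AXES = [(1, 1, 1), (1, -1, 1), (1, 1, -1), (1, -1, -1)]  # toy <111> analogues

class Vacancy:
    def __init__(self, pos):
        self.pos = pos
    def hop(self):                          # 3D random walk
        d = rng.choice(DIRS_3D)
        self.pos = tuple(p + q for p, q in zip(self.pos, d))

class SIACluster:
    def __init__(self, pos, p_rotate):
        self.pos = pos
        self.axis = rng.choice(GLIDE_AXES)  # current Burgers vector
        self.p_rotate = p_rotate
    def hop(self):                          # 1D glide with rare direction change
        if rng.random() < self.p_rotate:
            self.axis = rng.choice(GLIDE_AXES)
        step = rng.choice((1, -1))
        self.pos = tuple(p + step * a for p, a in zip(self.pos, self.axis))

vac = Vacancy((0, 0, 0))
sia = SIACluster((5, 5, 5), p_rotate=0.01)
for _ in range(1000):                       # one hop attempt per defect per step
    vac.hop()
    sia.hop()
```

In a full KMC model each hop would be drawn from rate-weighted event lists and checked against the void microstructure; varying `p_rotate` is the knob corresponding to the frequency of Burgers vector changes studied in the paper.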
In silico radiobiology: Have we reached the limit of Monte Carlo simulations?
NASA Astrophysics Data System (ADS)
Gholami, Y.; Toghyani, M.; Champion, C.; Kuncic, Z.
2014-03-01
Monte Carlo radiation transport models are increasingly being used to simulate biological damage. However, such radiation biophysics simulations require realistic molecular models for water, whereas existing Monte Carlo models are limited by their use of atomic cross-sections, which become inadequate for accurately modelling interactions of the very low-energy electrons that are responsible for biological damage. In this study, we borrow theoretical methods commonly employed in molecular dynamics simulations to model the molecular wavefunction of the water molecule as the first step towards deriving new molecular cross-sections. We calculate electron charge distributions for molecular water and find non-negligible differences between the vapor and liquid phases that can be attributed to intermolecular bonding in the condensed phase. We propose that a hybrid Monte Carlo - Molecular Dynamics (MC-MD) approach to modelling radiation biophysics will provide new insights into radiation damage and new opportunities to develop targeted molecular therapy strategies.
NASA Astrophysics Data System (ADS)
Sharma, Anupam; Long, Lyle N.
2004-10-01
A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
Cluster growth processes by direct simulation monte carlo method
NASA Astrophysics Data System (ADS)
Mizuseki, H.; Jin, Y.; Kawazoe, Y.; Wille, L. T.
Thin films obtained by cluster deposition have attracted strong attention both as a new manufacturing technique for realizing high-density magnetic recording media and as a means of creating systems with unique magnetic properties. Because the film's features are influenced by the cluster properties during the flight path, the relevant physical scale to be studied is as large as centimeters. In this paper, a new model of cluster growth processes based on a combination of the Direct Simulation Monte Carlo (DSMC) method and a cluster growth model is introduced to examine the effects of experimental conditions on cluster growth by an adiabatic expansion process. From the macroscopic viewpoint, we simulate the behavior of clusters and inert gas in the flight path under different experimental conditions. The internal energy of the cluster, which consists of rotational and vibrational energies, is limited by the binding energy, which depends on the cluster size. These internal and binding energies are used as criteria for cluster growth. The binding energy is estimated from surface and volume terms. Several types of size distribution of the generated clusters under various conditions are obtained with the present model. The results of the present numerical simulations reveal that the size distribution is strongly related to the experimental conditions and can be controlled.
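The growth criterion described above, internal energy bounded by a binding energy built from surface and volume terms, amounts to a liquid-drop-style estimate. The coefficients in this sketch are hypothetical placeholders, not values from the paper:

```python
def binding_energy(n, a_volume=1.0, a_surface=1.5):
    """Liquid-drop-style estimate: a volume term minus a surface correction.
    a_volume and a_surface are hypothetical coefficients; real values
    depend on the cluster material."""
    return a_volume * n - a_surface * n ** (2.0 / 3.0)

def can_grow(internal_energy, n, a_volume=1.0, a_surface=1.5):
    # a size-n cluster survives (and can keep growing) only while its internal
    # (rotational + vibrational) energy stays below its binding energy
    return internal_energy < binding_energy(n, a_volume, a_surface)
```

With these toy coefficients an 8-atom cluster binds 2.0 energy units, so it survives an internal energy of 1.0 but not 3.0; scanning this criterion over collision-by-collision energy exchange is what shapes the simulated size distributions.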
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.
2015-11-21
Finite temperature quantum and anharmonic effects are studied in H{sub 2}–Li{sup +}-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H{sub 2}. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H{sub 2} molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU{sub ads}, and enthalpy, ΔH{sub ads}, for H{sub 2} adsorption onto Li{sup +}-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H{sub 2}–Li{sup +}-benzene are the “helicopter” and “ferris wheel” H{sub 2} rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU{sub ads} and ΔH{sub ads} are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol{sup −1}, respectively.
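The one-dimensional free-rotor treatment mentioned above for the "helicopter" rotation admits a short numerical illustration. Units and the rotational constant B here are arbitrary (k_B = 1), and this is not the authors' thermochemistry code:

```python
import math

def free_rotor_internal_energy(T, B=1.0, m_max=400):
    """Thermal internal energy of a 1D quantum free rotor with levels
    E_m = B*m^2 (m = 0, +-1, +-2, ...), in units where k_B = 1."""
    z, ez = 1.0, 0.0                        # m = 0 contributes to z only
    for m in range(1, m_max + 1):
        e = B * m * m
        w = 2.0 * math.exp(-e / T)          # degeneracy 2 for +-m
        z += w
        ez += w * e
    return ez / z

U_hot = free_rotor_internal_energy(T=100.0)   # approaches equipartition, T/2
U_cold = free_rotor_internal_energy(T=0.1)    # rotation frozen out
```

At high temperature the rotor recovers the classical T/2 per rotational degree of freedom, while at low temperature its contribution vanishes; replacing the harmonic-oscillator term with this sum is the kind of simple anharmonic correction the abstract describes.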
Monte Carlo simulation of radiation streaming from a radioactive material shipping cask
Liu, Y.Y.; Schwarz, R.A.; Tang, J.S.
1996-04-01
Simulated detection of gamma radiation streaming from a radioactive material shipping cask has been performed with the Monte Carlo codes MCNP4A and MORSE-SGC/S. Despite inherent difficulties in simulating deep penetration of radiation and streaming, the simulations have yielded results that agree within one order of magnitude with the radiation survey data, with reasonable statistics. These simulations have also provided insight into modeling radiation detection, notably on the location and orientation of the radiation detector with respect to photon streaming paths, and on techniques used to reduce variance in the Monte Carlo calculations. 13 refs., 4 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.
2016-03-01
Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, the investigation of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified to what extent modification of several parameters influences the 3D flow and gas temperature fields and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and assess the sensitivity of the solutions to certain inputs. It is found that among cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.
Treatment planning for a small animal using Monte Carlo simulation
Chow, James C. L.; Leung, Michael K. K.
2007-12-15
The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
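The core idea, automatically ranking dispersed input variables by how strongly they separate failing Monte Carlo runs from passing ones, can be sketched with a crude effect-size score. This is a hypothetical stand-in for what a tool like TRAM automates, not its actual algorithm:

```python
import random, statistics

def rank_driving_variables(runs, failed):
    """runs: list of dicts {variable: value}; failed: parallel list of bools.
    Scores each variable by |mean(failed) - mean(passed)| / pooled std."""
    scores = {}
    for name in runs[0]:
        f = [r[name] for r, bad in zip(runs, failed) if bad]
        p = [r[name] for r, bad in zip(runs, failed) if not bad]
        if len(f) < 2 or len(p) < 2:
            continue
        pooled = statistics.pstdev([r[name] for r in runs]) or 1.0
        scores[name] = abs(statistics.mean(f) - statistics.mean(p)) / pooled
    return sorted(scores, key=scores.get, reverse=True)

# toy data set: failures are driven by 'mass_error' only
rng = random.Random(3)
runs, failed = [], []
for _ in range(500):
    m, w = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    runs.append({"mass_error": m, "wind": w})
    failed.append(m > 1.0)          # failure threshold on one variable only
ranking = rank_driving_variables(runs, failed)
```

On this synthetic set the driving variable sorts to the top while the irrelevant one scores near zero; a production tool would add interaction detection and handle thousands of variables in parallel.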
Monte Carlo Simulations of Background Spectra in Integral Imager Detectors
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.
1998-01-01
Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.
A generic algorithm for Monte Carlo simulation of proton transport
NASA Astrophysics Data System (ADS)
Salvat, Francesc
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
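Sampling a polar scattering angle from a numerically tabulated DCS, as described above, is commonly done by inverse-transform sampling of the cumulative cross section. The sketch below uses a toy forward-peaked DCS (the screening parameter A is illustrative), not the eikonal tables of the paper, and simple linear interpolation rather than the paper's adaptive error-controlled scheme:

```python
import bisect, random

def build_sampler(mu_grid, dcs_values):
    """Inverse-transform sampler for a DCS tabulated on a grid of
    mu = cos(theta); dcs_values hold the (unnormalized) d(sigma)/d(mu)."""
    cdf = [0.0]
    for k in range(1, len(mu_grid)):
        # trapezoidal accumulation of the cumulative cross section
        cdf.append(cdf[-1] + 0.5 * (dcs_values[k] + dcs_values[k - 1])
                             * (mu_grid[k] - mu_grid[k - 1]))
    total = cdf[-1]
    cdf = [c / total for c in cdf]

    def sample(rng):
        xi = rng.random()
        k = min(bisect.bisect_right(cdf, xi) - 1, len(mu_grid) - 2)
        frac = (xi - cdf[k]) / (cdf[k + 1] - cdf[k])   # linear within the bin
        return mu_grid[k] + frac * (mu_grid[k + 1] - mu_grid[k])

    return sample

# toy, strongly forward-peaked DCS (screened-Rutherford-like shape)
A = 0.01                                  # hypothetical screening parameter
mu_grid = [-1.0 + 0.002 * k for k in range(1001)]
dcs = [1.0 / (1.0 + A - m) ** 2 for m in mu_grid]
sample_mu = build_sampler(mu_grid, dcs)
rng = random.Random(0)
samples = [sample_mu(rng) for _ in range(20000)]
mean_mu = sum(samples) / len(samples)
```

Because the DCS is forward peaked, the sampled cos θ concentrates near 1; a class II (mixed) scheme would additionally split collisions into soft (condensed) and hard (sampled individually) branches at a cutoff angle.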
Magnetic properties for cobalt nanorings: Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Ye, Qingying; Chen, Shuiyuan; Zhong, Kehua; Huang, Zhigao
2012-02-01
In this paper, two structure models of cobalt nanoring cells (double-nanorings and four-nanorings, named D-rings and F-rings, respectively) have been considered. Based on Monte Carlo simulation, the magnetic properties of the D-rings and F-rings, such as hysteresis loops, spin configurations, and coercivity, have been studied. The simulated results indicate that both D-rings and F-rings with different inner radius (r) and separation of ring centers (d) display interesting magnetization behavior and spin configurations (onion-, vortex- and crescent-shape vortex-type states) in the magnetization process. Moreover, it is found that the overlap between neighboring nanorings can result in a deviation of the vortex-type states in the connected regions. Therefore, an appropriate d should be carefully considered in the design of nanoring devices. The simulated results can be explained by the competition between exchange energy and dipolar energy in the Co nanoring system. Furthermore, it is found that the simulated temperature dependence of the coercivity for the D-rings with different d can be well described by Hc = H0 exp[-(T/T0)^p].
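The reported temperature dependence Hc = H0 exp[-(T/T0)^p] can be fitted by linearizing in T^p and grid-searching the exponent p. The sketch below demonstrates this on synthetic data with known parameters, not on data from the paper:

```python
import math

def fit_hc(T, Hc, p_grid):
    """Fit Hc(T) = H0 * exp(-(T/T0)^p) by least squares in the linearized
    form ln Hc = ln H0 - (1/T0^p) * T^p, grid-searching the exponent p."""
    y = [math.log(h) for h in Hc]
    best = None
    for p in p_grid:
        x = [t ** p for t in T]
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        b = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))        # slope = -1/T0^p
        a = ym - b * xm                                # intercept = ln H0
        sse = sum((a + b * xi - yi) ** 2 for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, math.exp(a), (-1.0 / b) ** (1.0 / p), p)
    return best[1:]                                    # H0, T0, p

# synthetic data generated from known parameters (illustrative values)
T = [10.0 * k for k in range(1, 11)]
H0_true, T0_true, p_true = 500.0, 120.0, 1.5
Hc = [H0_true * math.exp(-(t / T0_true) ** p_true) for t in T]
H0, T0, p = fit_hc(T, Hc, p_grid=[0.5 + 0.1 * k for k in range(21)])
```

A nonlinear least-squares routine would refine p continuously, but the grid-plus-linear-regression approach is transparent and needs only the standard library.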
Learning About Ares I from Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hall, Charlie E.
2008-01-01
This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.
Monte Carlo simulation of photon way in clinical laser therapy
NASA Astrophysics Data System (ADS)
Ionita, Iulian; Voitcu, Gabriel
2011-07-01
The multiple scattering of light can increase the efficiency of laser therapy of inflammatory diseases by enlarging the treated area. Light absorption is essential for treatment, even though scattering dominates. Multiple scattering effects must be introduced using the Monte Carlo method for modeling light transport in tissue and, finally, for calculating the optical parameters. Diffuse reflectance measurements were made on highly concentrated live leukocyte suspensions under conditions similar to in-vivo measurements. The results were compared with the values determined by MC calculations, and the latter were adjusted to match the measured values of diffuse reflectance. The principal idea of MC simulations applied to absorption and scattering phenomena is to follow the optical path of a photon through the turbid medium. The concentrated live cell solution is a compromise between a homogeneous layer, as in the MC model, and light-live cell interaction, as in in-vivo experiments. In this way, MC simulation allows us to compute the absorption coefficient. The values of the optical parameters, derived from simulation by best fitting of the measured reflectance, were used to determine the effective cross section. Thus we can compute the absorbed radiation dose at the cellular level.
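The photon random walk that underlies such simulations can be sketched in a few lines: sample a free path from the total attenuation coefficient, then either absorb or scatter. The model below is deliberately minimal (semi-infinite medium, isotropic scattering, no refractive-index mismatch), and the optical coefficients are hypothetical, not fitted values from this work:

```python
import math, random

def diffuse_reflectance(mu_a, mu_s, n_photons=20000, seed=0):
    """Fraction of photons escaping back through the surface of a
    semi-infinite turbid medium (isotropic scattering, matched index)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                  # launch straight down into the medium
        while True:
            step = -math.log(rng.random() or 1e-12) / mu_t   # free path length
            z += uz * step
            if z < 0.0:                   # crossed the surface: reflected
                reflected += 1
                break
            if rng.random() > albedo:     # absorbed
                break
            uz = 2.0 * rng.random() - 1.0  # isotropic scatter (z-component only)
    return reflected / n_photons

R = diffuse_reflectance(mu_a=0.1, mu_s=10.0)   # cm^-1, hypothetical values
```

Fitting the measured diffuse reflectance then amounts to adjusting `mu_a` (and the scattering parameters) until `R` matches the experiment, which is the inversion loop the abstract describes.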
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
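The splitting/roulette logic of a weight window, and the idea of capping extreme splitting so a single history cannot run away, can be caricatured as follows. This is a toy illustration of the mechanism, not MCNP's or CCFE's actual implementation:

```python
import random

def apply_weight_window(weight, w_low, w_high, survival, rng, max_split=10):
    """Toy weight-window check for one particle at a window boundary.
    Returns the list of weights that continue the history. Capping the
    split count (max_split) loosely mimics de-tuning the window when an
    arriving weight deviates wildly, the 'long history' mitigation above."""
    if weight > w_high:                       # split heavy particles
        n = min(int(weight / w_high) + 1, max_split)
        return [weight / n] * n
    if weight < w_low:                        # Russian roulette on light ones
        if rng.random() < weight / survival:
            return [survival]
        return []                             # particle killed
    return [weight]                           # inside the window: unchanged

rng = random.Random(7)
# both branches preserve the expected weight; check the roulette branch:
total, trials = 0.0, 200000
for _ in range(trials):
    total += sum(apply_weight_window(0.02, w_low=0.1, w_high=1.0,
                                     survival=0.2, rng=rng))
mean_weight = total / trials                  # should stay close to 0.02
```

Because splitting and roulette are weight-conserving on average, capping or relaxing the window trades variance reduction for shorter, more load-balanced histories without biasing the result, which is the effect reported above.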
NASA Astrophysics Data System (ADS)
Liu, Zhirong; Chan, Hue Sun
2008-04-01
We develop two classes of Monte Carlo moves for efficient sampling of wormlike DNA chains that can have significant degrees of supercoiling, a conformational feature that is key to many aspects of biological function including replication, transcription, and recombination. One class of moves entails reversing the coordinates of a segment of the chain along one, two, or three axes of an appropriately chosen local frame of reference. These transformations may be viewed as a generalization, to the continuum, of the Madras-Orlitsky-Shepp algorithm for cubic lattices. Another class of moves, termed T±2, allows for interconversions between chains with different lengths by adding or subtracting two beads (monomer units) to or from the chain. Length-changing moves are generally useful for conformational sampling with a given site juxtaposition, as has been shown in previous lattice studies. Here, the continuum T±2 moves are designed to enhance their acceptance rate in supercoiled conformations. We apply these moves to a wormlike model in which excluded volume is accounted for by a bond-bond repulsion term. The computed autocorrelation functions for the relaxation of bond length, bond angle, writhe, and branch number indicate that the new moves lead to significantly more efficient sampling than conventional bead displacements and crankshaft rotations. A close correspondence is found in the equilibrium ensemble between the map of writhe computed for pairs of chain segments and the map of site juxtapositions or self-contacts. To evaluate the more coarse-grained freely jointed chain (random-flight) and cubic lattice models that are commonly used in DNA investigations, twisting (torsional) potentials are introduced into these models. Conformational properties for a given superhelical density σ may then be sampled by computing the writhe and using White's formula to relate the degree of twisting to writhe and σ. Extensive comparisons of contact patterns and knot probabilities
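White's formula, invoked above to relate twisting to writhe and the superhelical density σ, is a one-line relation once the relaxed linking number Lk0 is fixed. The plasmid size and helical repeat below are illustrative numbers, not parameters from the paper:

```python
def twist_from_writhe(writhe, sigma, lk0):
    """White's formula Lk = Tw + Wr, together with the supercoiling density
    sigma = (Lk - Lk0)/Lk0, gives the twist consistent with a sampled
    writhe: Tw = Lk0*(1 + sigma) - Wr."""
    return lk0 * (1.0 + sigma) - writhe

# illustrative: a 3-kb plasmid with a 10.5 bp/turn helical repeat
lk0 = 3000 / 10.5
tw = twist_from_writhe(writhe=-12.0, sigma=-0.06, lk0=lk0)
```

In a simulation one computes Wr geometrically for each sampled conformation and uses this relation to assign the torsional (twist) part of the energy at the chosen σ.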
Relation between gamma-ray family and EAS core: Monte-Carlo simulation of EAS core
NASA Technical Reports Server (NTRS)
Yanagita, T.
1985-01-01
Preliminary results of a Monte-Carlo simulation of Extensive Air Shower (EAS) cores (Ne = 100,000) are reported. For the first collision at the top of the atmosphere, a model with high multiplicity (high rapidity density) and large Pt (1.5 GeV average) is assumed. Most of the simulated cores show a complicated structure.
Monte Carlo Simulation of Callisto's Exosphere
NASA Astrophysics Data System (ADS)
Vorburger, Audrey; Wurz, Peter; Galli, André; Mousis, Olivier; Barabash, Stas; Lammer, Helmut
2014-05-01
to the surface, the sublimated particles dominate the day-side exosphere; however, their density profiles (with the exception of H and H2) decrease much more rapidly with altitude than those of the sputtered particles, so the latter start to dominate at altitudes above ~1000 km. Since the JUICE flybys are as low as 200 km above Callisto's surface, NIM is expected to register both the sublimated and the sputtered particle populations. Our simulations show that NIM's sensitivity is high enough to allow the detection of particles sputtered from the icy as well as the mineral surfaces, and to distinguish between the different composition models.
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke; Pignol, Jean-Philippe; Lechtman, Eli; Reilly, Raymond M.
2013-11-15
Purpose: The authors’ aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF). Methods: The authors used the Monte Carlo N-particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10–100 keV), an x-ray beam (100 kVp), and {sup 125}I and {sup 103}Pd brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF, based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays, was compared. Results: NDEFs show a strong dependence on photon energy, with peaks at 15, 30/40, and 90 keV. Cell model and subcellular location of AuNPs influence the peak position and value of NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either the single cell or cluster cell models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus or the cytoplasm. For 40 keV x-rays and a cluster of cells, to double the prescribed x-ray dose (NDEF = 2
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
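The linearization step can be written compactly. For a single quadratic term in the two-body interaction, the Hubbard-Stratonovich transformation trades the squared one-body operator for an integral over a Gaussian auxiliary field (schematic one-operator form; the phase factor s depends on the sign of the coupling V):

```latex
e^{-\Delta\beta\,\frac{V}{2}\hat{O}^{2}}
  = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} d\sigma\,
    e^{-\sigma^{2}/2}\; e^{\sigma\, s \sqrt{\Delta\beta\,|V|}\,\hat{O}},
\qquad
s = \begin{cases} \pm 1, & V < 0,\\[2pt] \pm i, & V > 0, \end{cases}
```

so the imaginary-time propagator becomes an average over one-body propagators in fluctuating external fields, which is what makes Monte Carlo sampling over the fields possible; couplings with V > 0 acquire complex phases and are the origin of the sign problem mentioned in the abstract.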
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate cosmogenic activation of such components before and after deployment. Cosmic ray showers include several types of particles with a wide range of energies (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to model cosmic ray hadronic interactions. In order to validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results deviate by 3%.
An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media
NASA Astrophysics Data System (ADS)
Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu
2016-03-01
Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, an accurate yet simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
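The lookup-table-plus-surface-fit workflow can be sketched as follows. Since neither the Monte Carlo lookup table nor the paper's empirical formula is reproduced here, a smooth placeholder surface and a generic low-order polynomial fit stand in for both; the optical-property ranges are likewise only illustrative.

```python
import numpy as np

# Toy stand-in for a Monte Carlo lookup table: "diffuse reflectance" sampled
# on a grid of absorption (mu_a) and reduced scattering (mu_s') coefficients.
# The functional form below is a smooth placeholder, not tissue physics.
mu_a = np.linspace(0.01, 1.0, 25)          # 1/mm, assumed range
mu_s = np.linspace(5.0, 20.0, 25)          # 1/mm, assumed range
A, S = np.meshgrid(mu_a, mu_s, indexing="ij")
table = S / (S + 10.0 * A)                 # placeholder "reflectance" surface

# Empirical formula: low-order polynomial in (mu_a, mu_s') fitted to the table.
def design(a, s):
    return np.column_stack([np.ones_like(a), a, s, a * s, a**2, s**2,
                            a**2 * s, a * s**2, a**3, s**3])

X = design(A.ravel(), S.ravel())
coef, *_ = np.linalg.lstsq(X, table.ravel(), rcond=None)
fit = X @ coef
rel_err = np.abs(fit - table.ravel()) / table.ravel()
print(f"max relative error of the fitted surface: {rel_err.max():.4f}")
```

In the paper's actual workflow the table entries come from repeated Monte Carlo photon-transport runs, and the fitted surface then replaces the expensive simulation at prediction time.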
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
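The billing argument can be made concrete with a toy cost model, assuming ideal 1/n scaling and per-started-hour billing (both modeling assumptions here, not details taken from the paper): the billed machine-hours equal the serial run time exactly when n divides the run time in hours, and any other n pays for rounded-up partial hours.

```python
import math

def completion_time(total_hours, n):
    """Ideal parallel completion time for an embarrassingly parallel MC run."""
    return total_hours / n

def billed_cost(total_hours, n):
    """Machine-hours billed when each of n machines pays per started hour."""
    return n * math.ceil(total_hours / n)

T = 12  # total serial simulation time in hours (illustrative)
costs = {n: billed_cost(T, n) for n in range(1, 25)}
optimal = [n for n, c in costs.items() if c == T]
print(optimal)  # prints [1, 2, 3, 4, 6, 12]: the factors of 12 waste nothing
```

For example, n = 5 machines finish a 12-hour job in 2.4 hours but are billed 3 hours each, for 15 machine-hours instead of 12.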
Monte Carlo simulations of ionization potential depression in dense plasmas
NASA Astrophysics Data System (ADS)
Stransky, M.
2016-01-01
A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of electric potential. Atomic levels were approximated to be independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
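The grand canonical sampling itself can be sketched with the standard textbook Metropolis insertion/deletion acceptance rules; here a softened Coulomb-like toy potential stands in for the paper's interaction, all parameter values (box size, temperature, chemical potential, thermal volume) are illustrative, and no Barnes-Hut acceleration is attempted.

```python
import numpy as np

rng = np.random.default_rng(1)
L, beta, mu, lam3 = 10.0, 1.0, -1.0, 1.0   # box edge, 1/kT, chem. potential, Λ³

def pair_energy(r):
    # Softened Coulomb-like repulsion: toy stand-in for the paper's pair potential.
    return 1.0 / np.sqrt(r * r + 0.25)

def energy_with(pos, x):
    """Interaction energy of a particle at x with all particles in pos."""
    if len(pos) == 0:
        return 0.0
    d = np.linalg.norm(np.asarray(pos) - x, axis=1)
    return pair_energy(d).sum()

def gcmc_step(pos):
    """One grand canonical insertion/deletion attempt (Metropolis)."""
    N = len(pos)
    if rng.random() < 0.5:                       # attempt insertion
        x = rng.uniform(0, L, 3)
        dU = energy_with(pos, x)
        acc = (L**3 / (lam3 * (N + 1))) * np.exp(beta * mu - beta * dU)
        if rng.random() < min(1.0, acc):
            pos.append(x)
    elif N > 0:                                  # attempt deletion
        k = int(rng.integers(N))
        rest = pos[:k] + pos[k + 1:]
        dU = -energy_with(rest, pos[k])          # energy change on removal
        acc = (lam3 * N / L**3) * np.exp(-beta * mu - beta * dU)
        if rng.random() < min(1.0, acc):
            pos[:] = rest

pos = []
for _ in range(2000):
    gcmc_step(pos)
print("particles at end of run:", len(pos))
```

The particle number fluctuates around the equilibrium set by the chemical potential and the repulsion; a production code would add displacement moves and, as in the paper, a tree algorithm to make each energy evaluation cheaper than O(N).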
Monte Carlo simulation algorithm for B-DNA.
Howell, Steven C; Qiu, Xiangyun; Curtis, Joseph E
2016-11-01
Understanding the structure-function relationship of biomolecules containing DNA has motivated experiments aimed at determining molecular structure using methods such as small-angle X-ray and neutron scattering (SAXS and SANS). SAXS and SANS are useful for determining macromolecular shape in solution, a process which benefits by using atomistic models that reproduce the scattering data. The variety of algorithms available for creating and modifying model DNA structures lack the ability to rapidly modify all-atom models to generate structure ensembles. This article describes a Monte Carlo algorithm for simulating DNA, not with the goal of predicting an equilibrium structure, but rather to generate an ensemble of plausible structures which can be filtered using experimental results to identify a sub-ensemble of conformations that reproduce the solution scattering of DNA macromolecules. The algorithm generates an ensemble of atomic structures through an iterative cycle in which B-DNA is represented using a wormlike bead-rod model, new configurations are generated by sampling bend and twist moves, then atomic detail is recovered by back mapping from the final coarse-grained configuration. Using this algorithm on commodity computing hardware, one can rapidly generate an ensemble of atomic level models, each model representing a physically realistic configuration that could be further studied using molecular dynamics. © 2016 Wiley Periodicals, Inc. PMID:27671358
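A bend move on the coarse-grained bead-rod chain can be sketched as a rigid pivot rotation of the tail about an axis through a chosen bead, which preserves every rod length by construction. This is a simplified stand-in for the article's bend/twist sampling (the twist degree of freedom and the atomistic back-mapping are omitted); the angle and axis below are illustrative.

```python
import numpy as np

def bend_move(beads, k, theta, axis):
    """Pivot-style bend: rotate beads k+1..end by theta about an axis through
    bead k.  A rigid rotation preserves every rod (bond) length exactly."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues
    out = beads.copy()
    out[k + 1:] = (beads[k + 1:] - beads[k]) @ R.T + beads[k]
    return out
```

Sampling many such moves (with an acceptance criterion on the bending energy) produces the configuration ensemble; each accepted coarse-grained configuration would then be back-mapped to an all-atom B-DNA model as described in the abstract.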
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Monte Carlo code for high spatial resolution ocean color simulations.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito; Cunha, José C
2010-09-10
A Monte Carlo code for ocean color simulations has been developed to model in-water radiometric fields of downward and upward irradiance (E(d) and E(u)), and upwelling radiance (L(u)) in a two-dimensional domain with a high spatial resolution. The efficiency of the code has been optimized by applying state-of-the-art computing solutions, while the accuracy of simulation results has been quantified through benchmarking against the widely used Hydrolight code for various values of seawater inherent optical properties and different illumination conditions. Considering a seawater single scattering albedo of 0.9, as well as surface waves of 5 m width and 0.5 m height, the study has shown that the number of photons required to quantify uncertainties induced by wave focusing effects on E(d), E(u), and L(u) data products is of the order of 10(6), 10(9), and 10(10), respectively. On this basis, the effects of sea-surface geometries on radiometric quantities have been investigated for different surface gravity waves. Data products from simulated radiometric profiles have finally been analyzed as a function of the deployment speed and sampling frequency of current free-fall systems in view of providing recommendations to improve measurement protocols.
PENEPMA: a Monte Carlo programme for the simulation of X-ray emission in EPMA
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2016-02-01
The Monte Carlo programme PENEPMA performs simulations of X-ray emission from samples bombarded with electron beams. It is based both on the general-purpose Monte Carlo simulation package PENELOPE, an elaborate system for the simulation of coupled electron-photon transport in arbitrary materials, and on the geometry subroutine package PENGEOM, which tracks particles through complex material structures defined by quadric surfaces. In this work, we give a brief overview of the capabilities of the latest version of PENEPMA along with several examples of its application to the modelling of electron probe microanalysis measurements.
Monte Carlo Simulation of Solar Reflectances for Cloudy Atmospheres.
NASA Astrophysics Data System (ADS)
Barker, H. W.; Goldstein, R. K.; Stevens, D. E.
2003-08-01
Monte Carlo simulations of solar radiative transfer were performed for a well-resolved, large, three-dimensional (3D) domain of boundary layer cloud simulated by a cloud-resolving model. In order to represent 3D distributions of optical properties for 2 × 10^6 cloudy cells, attenuation by droplets was handled by assigning each cell a cumulative distribution of extinction derived from either a model or an assumed discrete droplet size spectrum. This minimizes the required number of detailed phase functions. Likewise, to simulate statistically significant, high-resolution imagery, it was necessary to apply variance reduction techniques. Three techniques were developed for use with the local estimation method of computing reflectance. First, small fractions of the reflectance come from numerous small contributions computed at each scattering event. Terminating the calculation of a contribution when it falls below a minimum threshold (of order 10^-3) was found to impact reflectance estimates minimally but reduced computation time by 10%. Second, large fractions of the reflectance come from infrequent realizations of large contributions. When sampled poorly, they boost Monte Carlo noise significantly. Removing contributions above a maximum threshold, storing them in a domainwide reservoir, adding the threshold value to local estimates, and, at simulation's end, distributing the reservoir across the domain in proportion to local reflectance tends to reduce variance considerably. This regionalization technique works well when the number of photons per unit area is small (nominally 50 000). A maximum threshold of 100 reduces variance greatly with little impact on reflectance estimates. Third, if
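The reservoir ("regionalization") idea can be sketched in a few lines, assuming a heavy-tailed field of local-estimate contributions: rare huge contributions are clipped at a cap, the excess is banked in a domain-wide reservoir, and the reservoir is redistributed in proportion to the clipped field so that the domain total is conserved. The cap choice and the test distribution below are illustrative, not values from the paper.

```python
import numpy as np

def regionalize(contrib, cap):
    """Clip large local-estimate contributions at `cap`, bank the excess in a
    domain-wide reservoir, then redistribute the reservoir across pixels in
    proportion to the clipped field.  The domain total is conserved."""
    clipped = np.minimum(contrib, cap)
    reservoir = (contrib - clipped).sum()
    return clipped + reservoir * clipped / clipped.sum()

rng = np.random.default_rng(7)
field = rng.lognormal(mean=0.0, sigma=2.0, size=1000)   # heavy-tailed contributions
smoothed = regionalize(field, cap=np.quantile(field, 0.99))
print(field.sum(), smoothed.sum())  # totals agree
```

The trade-off is a small local bias (excess is smeared over the domain) in exchange for a large reduction in pixel-level Monte Carlo noise, which matters most when few photons land in each pixel.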
Recent developments in quantum Monte Carlo simulations with applications for cold gases.
Pollet, Lode
2012-09-01
This is a review of recent developments in Monte Carlo methods in the field of ultracold gases. For bosonic atoms in an optical lattice we discuss path-integral Monte Carlo simulations with worm updates and show the excellent agreement with cold atom experiments. We also review recent progress in simulating bosonic systems with long-range interactions, disordered bosons, mixtures of bosons and spinful bosonic systems. For repulsive fermionic systems, determinantal methods at half filling are sign free, but in general no sign-free method exists. We review the developments in diagrammatic Monte Carlo for the Fermi polaron problem and the Hubbard model, and show the connection with dynamical mean-field theory. We end the review with diffusion Monte Carlo for the Stoner problem in cold gases.
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed for ensuring detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
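For contrast with ParRot's localized concerted rotation, a conventional torsion move can be sketched as follows: rotating everything downstream of a single bond preserves all bond lengths and bond angles, but it reorients the entire chain tail, which is exactly the nonlocality ParRot is designed to avoid. (This sketch is the conventional baseline move, not ParRot itself; the chain and rotation angle are illustrative.)

```python
import numpy as np

def torsion_move(coords, bond, dphi):
    """Conventional torsion move: rotate every atom downstream of `bond`
    (the bond between atoms bond and bond+1) by dphi about that bond's axis.
    Bond lengths and bond angles are unchanged, but the whole chain tail
    reorients -- the nonlocal effect that ParRot avoids."""
    a, b = coords[bond], coords[bond + 1]
    u = (b - a) / np.linalg.norm(b - a)
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])
    R = np.eye(3) + np.sin(dphi) * K + (1 - np.cos(dphi)) * (K @ K)  # Rodrigues
    out = coords.copy()
    out[bond + 2:] = (coords[bond + 2:] - b) @ R.T + b
    return out
```

In a dense melt such a tail sweep is almost always rejected, which is why localized four-bond moves with proper Jacobian reweighting are so much more effective for interior torsions.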
Self-Consistent Monte Carlo Simulations of Positive Column Discharges
NASA Astrophysics Data System (ADS)
Lawler, J. E.; Kortshagen, U.
1998-10-01
In recent years it has become widely recognized that electron distribution functions in atomic gas positive column discharges are best described as nonlocal over most of the range of R × N (column radius × gas density) where positive columns are stable. The use of an efficient Monte Carlo code with a radial potential expansion in powers of r^2, and with judiciously chosen constraints on the potential near the axis and wall, now provides fully self-consistent kinetic solutions using only small computers. A set of solutions at smaller R × N and lower currents is presented which exhibits the classic negative dynamic resistance of the positive column at low currents. The negative dynamic resistance is due to a non-negligible Debye length and is sometimes described as a transition from free to ambipolar diffusion. This phenomenon is sensitive to radial variations of key parameters in the positive column, and thus kinetic theory simulations are likely to provide a more realistic description than classic isothermal fluid models of the positive column. Comparisons of kinetic theory simulations to various fluid models of the positive column continue to provide new insight on this 'cornerstone' problem of Gaseous Electronics.
Monte Carlo simulations of microgap gas-filled proportional counters
NASA Astrophysics Data System (ADS)
Kundu, Ashoke; Morton, Edward J.; Key, Martyn J.; Luggar, Russell D.
1999-09-01
Monte Carlo calculations have been widely employed to model the interactions of electrons and photons as they travel through and collide with matter. This approach has been applied with some success to the problem of simulating the response of gas-filled proportional counters, mapping out electron transport through the electric field on an interaction-by-interaction basis. These studies focus on the multiplication of electrons as they drift into the high electric field region of the detector and subsequently avalanche. We are using this technique in our new simulation code to depict avalanching in microgap gas-filled proportional counters, in order to investigate the variation of two principal detector properties with the anode pitch used in the detector. Spatial resolution information can be obtained by measuring the lateral diffusion distance of an electron from the point where it is liberated to the point in the detector where it initiates an avalanche. By also modeling the motion of the positive ions that are left behind from the initial avalanche, we are able to gauge the effect of space charge distortion on subsequent avalanches. This effect is particularly important at the high X-ray count rates that we are interested in for our ultimate aim, which is to use the detectors as part of a high-speed tomography system for imaging multiphase oil/water/gas flows.
Microbial contamination in poultry chillers estimated by Monte Carlo simulations
Technology Transfer Automated Retrieval System (TEKTRAN)
The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers were compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...
Monte Carlo Simulations of Light Propagation in Apples
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...
Quantum Monte Carlo simulation with a black hole
NASA Astrophysics Data System (ADS)
Benić, Sanjin; Yamamoto, Arata
2016-05-01
We perform quantum Monte Carlo simulations in the background of a classical black hole. The lattice discretized path integral is numerically calculated in the Schwarzschild metric and in its approximated metric. We study spontaneous symmetry breaking of a real scalar field theory. We observe inhomogeneous symmetry breaking induced by an inhomogeneous gravitational field.
Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation
ERIC Educational Resources Information Center
Silver, N. Clayton; Hittner, James B.; May, Kim
2004-01-01
The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…
Monte Carlo ICRH simulations in fully shaped anisotropic plasmas
Jucker, M.; Graves, J. P.; Cooper, W. A.; Mellet, N.; Brunner, S.
2008-11-01
In order to numerically study the effects of Ion Cyclotron Resonant Heating (ICRH) on the fast particle distribution function in general plasma geometries, three codes have been coupled: VMEC generates a general (2D or 3D) MHD equilibrium including full shaping and pressure anisotropy. This equilibrium is then mapped into Boozer coordinates. The full-wave code LEMan then calculates the power deposition and electromagnetic field strength of a wave field generated by a chosen antenna using a warm model. Finally, the single particle Hamiltonian code VENUS combines the outputs of the two previous codes in order to calculate the evolution of the distribution function. Within VENUS, Monte Carlo operators for Coulomb collisions of the fast particles with the background plasma have been implemented, accounting for pitch angle and energy scattering. Also, ICRH is simulated using Monte Carlo operators on the Doppler shifted resonant layer. The latter operators act in velocity space and induce a change of perpendicular and parallel velocity depending on the electric field strength and the corresponding wave vector. Eventually, the change in the distribution function will then be fed into VMEC for generating a new equilibrium and thus a self-consistent solution can be found. This model is an enhancement of previous studies in that it is able to include full 3D effects such as magnetic ripple, treat the effects of non-zero orbit width consistently and include the generation and effects of pressure anisotropy. Here, first results of coupling the three codes will be shown in 2D tokamak geometries.
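The Monte Carlo Coulomb-collision operator for pitch-angle scattering is commonly written in the Boozer-Kuo-Petravic form, in which the pitch λ = v∥/v relaxes deterministically and receives a random kick each step. The sketch below uses that standard form with illustrative collision frequency and time step, not the values or implementation used in VENUS.

```python
import numpy as np

def pitch_angle_step(lam, nu, dt, rng):
    """Monte Carlo pitch-angle scattering step (Boozer-Kuo-Petravic form):
    lam is the pitch v_par/v and nu the deflection collision frequency."""
    sign = rng.choice([-1.0, 1.0], size=np.shape(lam))
    lam_new = lam * (1.0 - nu * dt) + sign * np.sqrt((1.0 - lam**2) * nu * dt)
    return np.clip(lam_new, -1.0, 1.0)

rng = np.random.default_rng(3)
lam = np.full(20000, 0.9)            # beam-like initial pitch distribution
for _ in range(200):
    lam = pitch_angle_step(lam, nu=1.0, dt=0.01, rng=rng)
print(lam.mean())  # mean pitch decays toward 0 as the distribution isotropizes
```

Because the random kick has zero mean, the ensemble-averaged pitch decays as (1 − νΔt) per step, while the kick amplitude drives the distribution toward isotropy; an analogous operator acting on energy, plus the RF kicks at the resonant layer, completes the velocity-space model described in the abstract.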
Thomas, R S; Yang, R S; Morgan, D G; Moorman, M P; Kermani, H R; Sloane, R A; O'Connor, R W; Adkins, B; Gargas, M L; Andersen, M E
1996-01-01
During a 2-year chronic inhalation study on methylene chloride (2000 or 0 ppm; 6 hr/day, 5 days/week), gas-uptake pharmacokinetic studies and tissue partition coefficient determinations were conducted on female B6C3F1 mice after 1 day, 1 month, 1 year, and 2 years of exposure. Using physiologically based pharmacokinetic (PBPK) modeling coupled with Monte Carlo simulation and bootstrap resampling for data analyses, a significant induction in the mixed function oxidase (MFO) rate constant (Vmaxc) was observed at the 1-day and 1-month exposure points when compared to concurrent control mice, while decreases in the glutathione S-transferase (GST) rate constant (Kfc) were observed in the 1-day and 1-month exposed mice. Within exposure groups, the apparent Vmaxc maintained significant increases in the 1-month and 2-year control groups. Although the same initial increase exists in the exposed group, the 2-year Vmaxc is significantly smaller than the 1-month group (p < 0.001). Within group differences in median Kfc values show a significant decrease in both 1-month and 2-year groups among control and exposed mice (p < 0.001). Although no changes in methylene chloride solubility as a result of prior exposure were observed in blood, muscle, liver, or lung, a marginal decrease in the fat:air partition coefficient was found in the exposed mice at p = 0.053. Age related solubility differences were found in muscle:air, liver:air, lung:air, and fat:air partition coefficients at p < 0.001, while the solubility of methylene chloride in blood was not affected by age (p = 0.461). As a result of this study, we conclude that age and prior exposure to methylene chloride can produce notable changes in disposition and metabolism and may represent important factors in the interpretation of toxicologic data and its application to risk assessment. PMID:8875160
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
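The tracker/tally-server split can be sketched with threads and a message queue standing in for compute nodes and network messages (a toy stand-in for the MPI implementation in OpenMC; all counts and bin layouts are illustrative): trackers simulate particle events and send scores, while a dedicated server owns the tally array and is the only writer.

```python
import queue
import threading
import numpy as np

def tally_server(q, tallies):
    """Receive (bin, score) messages until a sentinel and accumulate them.
    Being the sole writer, the server needs no locking on the tally array."""
    while True:
        msg = q.get()
        if msg is None:
            break
        idx, score = msg
        tallies[idx] += score

def tracker(q, rng, n_particles, n_bins):
    """Stand-in for particle tracking: each simulated event scores into a bin."""
    for _ in range(n_particles):
        q.put((int(rng.integers(n_bins)), 1.0))

n_bins, n_trackers, n_particles = 16, 4, 1000
tallies = np.zeros(n_bins)
q = queue.Queue()
server = threading.Thread(target=tally_server, args=(q, tallies))
server.start()
threads = [threading.Thread(target=tracker,
                            args=(q, np.random.default_rng(i), n_particles, n_bins))
           for i in range(n_trackers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
q.put(None)          # sentinel: all trackers are done
server.join()
print(tallies.sum())  # prints 4000.0: every scored event reached the server
```

The design point made in the paper is that this decomposition removes the tally memory from the tracking nodes, so the tally resolution is limited by the servers' aggregate memory rather than by what fits on each compute node.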
Linac Coherent Light Source Monte Carlo Simulation
2006-03-15
This suite consists of codes to generate an initial x-ray photon distribution and to propagate the photons through various objects. The suite is designed specifically for simulating the Linac Coherent Light Source, an x-ray free electron laser (XFEL) being built at the Stanford Linear Accelerator Center. The purpose is to provide sufficiently detailed characteristics of the laser to engineers who are designing the laser diagnostics.
Radiation response of inorganic scintillators: Insights from Monte Carlo simulations
Prange, Micah P.; Wu, Dangxin; Xie, YuLong; Campbell, Luke W.; Gao, Fei; Kerisit, Sebastien N.
2014-07-24
The spatial and temporal scales of hot particle thermalization in inorganic scintillators are critical factors determining the extent of second- and third-order nonlinear quenching in regions with high densities of electron-hole pairs, which, in turn, leads to the light yield nonproportionality observed, to some degree, for all inorganic scintillators. Therefore, kinetic Monte Carlo simulations were performed to calculate the distances traveled by hot electrons and holes as well as the time required for the particles to reach thermal energy following γ-ray irradiation. CsI, a common scintillator from the alkali halide class of materials, was used as a model system. Two models of quasi-particle dispersion were evaluated, namely, the effective mass approximation model and a model that relied on the group velocities of electrons and holes determined from band structure calculations. Both models predicted rapid electron-hole pair recombination over short distances (a few nanometers) as well as a significant extent of charge separation between electrons and holes that did not recombine and reached thermal energy. However, the effective mass approximation model predicted much longer electron thermalization distances and times than the group velocity model. Comparison with limited experimental data suggested that the group velocity model provided more accurate predictions. Nonetheless, both models indicated that hole thermalization is faster than electron thermalization and thus is likely to be an important factor determining the extent of third-order nonlinear quenching in high-density regions. The merits of different models of quasi-particle dispersion are also discussed.
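The thermalization calculation can be caricatured by a toy kinetic Monte Carlo walk in which a hot carrier loses a fixed phonon energy at each scattering event and moves one mean free path in a random direction between events. All energies, the constant mean free path, and the fixed energy loss per event are illustrative simplifications, not CsI parameters or either of the dispersion models compared in the paper.

```python
import numpy as np

def thermalization(E0, phonon_E, mfp, kT, rng):
    """Toy kinetic Monte Carlo walk: a hot carrier loses one phonon energy
    per scattering event and moves one mean free path in a random direction
    between events; returns (distance from origin, number of steps)."""
    pos = np.zeros(3)
    E, steps = E0, 0
    while E > kT:
        v = rng.normal(size=3)
        pos += mfp * v / np.linalg.norm(v)
        E -= phonon_E
        steps += 1
    return np.linalg.norm(pos), steps

rng = np.random.default_rng(5)
dists = [thermalization(1.0, 0.01, 1.0, 0.025, rng)[0] for _ in range(500)]
print(np.mean(dists))  # ~ mfp * sqrt(steps), the random-walk scaling
```

Even this caricature shows the qualitative point of such simulations: the thermalization distance grows diffusively with the number of scattering events, so carriers with faster energy loss (here, holes in the paper's models) thermalize closer to their starting point.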
Simulating rotationally inelastic collisions using a direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Schullian, O.; Loreau, J.; Vaeck, N.; van der Avoird, A.; Heazlewood, B. R.; Rennick, C. J.; Softley, T. P.
2015-12-01
A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH3 by He.
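The per-particle bookkeeping described above can be sketched in Python. The 4-state rate matrix and time step below are made-up stand-ins (the actual method builds its rates from ab initio state-changing cross sections as a function of collision energy); the point is that each particle carries a rotational population vector that relaxes while conserving total probability:

```python
import numpy as np

# Each DSMC particle is labelled with a vector of rotational-state
# populations that evolves in time.  Here one particle's 4-state vector
# relaxes under a made-up rate matrix K (units s^-1, columns = source
# state); a real simulation would derive K from ab initio cross sections.
n_states = 4
K = np.array([[0.0, 2.0, 0.5, 0.1],
              [1.0, 0.0, 2.0, 0.5],
              [0.2, 1.0, 0.0, 2.0],
              [0.1, 0.2, 1.0, 0.0]])
np.fill_diagonal(K, -K.sum(axis=0))   # columns sum to zero: probability conserved

p = np.array([0.1, 0.2, 0.3, 0.4])    # initial rotational populations
dt = 1e-3                             # s, illustrative relaxation step
for _ in range(5000):
    p = p + dt * (K @ p)              # explicit Euler step per collision interval

print("relaxed populations:", p, "sum:", p.sum())
```

Because the columns of `K` sum to zero, the total population is conserved exactly at every step, which is the invariant the population-vector labelling relies on.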
Diffuse photon density wave measurements and Monte Carlo simulations.
Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A
2015-10-01
Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal–noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.
Direct Simulation Monte Carlo Simulations of Low Pressure Semiconductor Plasma Processing
Gochberg, L. A.; Ozawa, T.; Deng, H.; Levin, D. A.
2008-12-31
The two widely used plasma deposition tools for semiconductor processing are Ionized Metal Physical Vapor Deposition (IMPVD) of metals using either planar or hollow cathode magnetrons (HCM), and inductively-coupled plasma (ICP) deposition of dielectrics in High Density Plasma Chemical Vapor Deposition (HDP-CVD) reactors. In these systems, the injected neutral gas flows are generally in the transonic to supersonic flow regime. The Hybrid Plasma Equipment Model (HPEM) has been developed and is strategically and beneficially applied to the design of these tools and their processes. For the most part, the model uses continuum-based techniques, and thus, as pressures decrease below 10 mTorr, the continuum approaches in the model become questionable. Modifications have been previously made to the HPEM to significantly improve its accuracy in this pressure regime. In particular, the Ion Monte Carlo Simulation (IMCS) was added, wherein a Monte Carlo simulation is used to obtain ion and neutral velocity distributions in much the same way as in direct simulation Monte Carlo (DSMC). As a further refinement, this work presents the first steps towards the adaptation of full DSMC calculations to replace part of the flow module within the HPEM. Six species (Ar, Cu, Ar*, Cu*, Ar{sup +}, and Cu{sup +}) are modeled in DSMC. To couple SMILE as a module to the HPEM, source functions for species, momentum and energy from plasma sources will be provided by the HPEM. The DSMC module will then compute a quasi-converged flow field that will provide neutral and ion species densities, momenta and temperatures. In this work, the HPEM results for a hollow cathode magnetron (HCM) IMPVD process using the Boltzmann distribution are compared with DSMC results using portions of those HPEM computations as an initial condition.
Monte Carlo simulation of breast imaging using synchrotron radiation
Fitousi, N. T.; Delis, H.; Panayiotakis, G.
2012-04-15
Purpose: Synchrotron radiation (SR), being the brightest artificial source of x-rays with a very promising geometry, has raised scientific expectations that it could be used for breast imaging with optimized results. The ''in situ'' evaluation of this technique is difficult to perform, mostly due to the limited availability of SR facilities worldwide. In this study, a simulation model for SR breast imaging was developed, based on Monte Carlo simulation techniques, and validated using data acquired in the SYRMEP beamline of the Elettra facility in Trieste, Italy. Furthermore, primary results concerning the performance of SR were derived. Methods: The developed model includes the exact setup of the SR beamline, considering that the x-ray source is located at almost 23 m from the slit, while the photon energy was considered to originate from a very narrow Gaussian spectrum. Breast phantoms, made of Perspex and filled with air cavities, were irradiated with energies in the range of 16-28 keV. The model included a Gd{sub 2}O{sub 2}S detector with the same characteristics as the one available in the SYRMEP beamline. Following the development and validation of the model, experiments were performed in order to evaluate the contrast resolution of SR. A phantom made of adipose tissue and filled with inhomogeneities of several compositions and sizes was designed and utilized to simulate the irradiation under conventional mammography and SR conditions. Results: The validation results of the model showed excellent agreement with the experimental data, with the correlation for contrast being 0.996. Significant differences only appeared at the edges of the phantom, where phase effects occur. The initial evaluation experiments revealed that SR shows very good performance in terms of the image quality indices utilized, namely subject contrast and contrast to noise ratio. The response of subject contrast to energy is monotonic; however, this is not the case for the contrast to noise ratio.
Numerical study of error propagation in Monte Carlo depletion simulations
Wyant, T.; Petrovic, B.
2012-07-01
Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k{sub eff}, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
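The replica-run procedure for separating apparent from true variance can be illustrated with a toy surrogate in Python. The `run_depletion` function below is a hypothetical stand-in for one Monte Carlo depletion run, not the actual code; what matters is the comparison of the code-reported uncertainty against the spread across seeds:

```python
import numpy as np

# 19 replica runs differing only in the initial random number seed.
# run_depletion is a toy surrogate for one Monte Carlo depletion run:
# it returns a k_eff estimate and the code-reported (apparent) sigma.
n_replicas = 19

def run_depletion(seed):
    r = np.random.default_rng(seed)
    batches = 1.0 + 0.01 * r.standard_normal(1000)   # per-batch k_eff values
    return batches.mean(), batches.std(ddof=1) / np.sqrt(batches.size)

results = [run_depletion(seed) for seed in range(n_replicas)]
k_effs = np.array([k for k, _ in results])

apparent_sigma = np.mean([s for _, s in results])    # what a single run reports
true_sigma = k_effs.std(ddof=1)                      # spread across replicas

# With error propagating between depletion steps, the true variance can
# exceed the apparent one; in this memoryless toy the two roughly agree.
print(f"apparent sigma = {apparent_sigma:.2e}, true sigma = {true_sigma:.2e}")
```

In a real depletion sequence the number densities fed into each step carry the statistical error of the previous step, which is exactly why `true_sigma` can grow beyond `apparent_sigma` as burnup proceeds.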
Monte Carlo simulations of single crystals from polymer solutions
NASA Astrophysics Data System (ADS)
Zhang, Jianing; Muthukumar, M.
2007-06-01
A novel "anisotropic aggregation" model is proposed to simulate the nucleation and growth of polymer single crystals as functions of temperature and polymer concentration in dilute solutions. Prefolded chains in a dilute solution are assumed to aggregate at a seed nucleus with an anisotropic interaction by a reversible adsorption/desorption mechanism, with temperature, concentration, and seed size being the control variables. The Monte Carlo results of this model resolve the long-standing dilemma regarding kinetic and thermal roughening by producing a rough-flat-rough transition in the crystal morphology with increasing temperature. It is found that the crystal growth rate varies nonlinearly with temperature and concentration, without any marked transitions among regimes of polymer crystallization kinetics. The induction time increases with decreasing seed nucleus size, increasing temperature, or decreasing concentration. The apparent critical nucleus size is found to increase exponentially with increasing temperature or decreasing concentration, leading to a critical nucleus diagram in the temperature-concentration plane with three regions of different nucleation barriers: no growth, nucleation and growth, and spontaneous growth. Melting temperatures as functions of crystal size, heating rate, and concentration are also reported. The present model, which falls into the same category as small-molecule crystallization with anisotropic interactions, captures most of the phenomenology of polymer crystallization in dilute solutions.
Monte Carlo modeling of exospheric bodies - Mercury
NASA Technical Reports Server (NTRS)
Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.
1978-01-01
In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.
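Sampling from a Maxwell-Boltzmann flux distribution, as the analysis recommends, is straightforward: substituting x = (v/a)^2 with a = sqrt(2kT/m) turns the flux density v^3 exp(-v^2/a^2) into a Gamma(2) density. A minimal Python sketch, with an illustrative temperature and species rather than values from the Mercury study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample speeds from a Maxwell-Boltzmann *flux* distribution,
# f(v) ~ v^3 exp(-v^2 / a^2) with a = sqrt(2 k T / m), appropriate when
# surface-emitted particles are treated as components of flux.
# With x = (v/a)^2 this becomes a Gamma(2) density, x e^{-x}.
k_B = 1.380649e-23          # J/K
T = 500.0                   # surface temperature (K), illustrative value
m = 6.64e-27                # helium atom mass (kg), illustrative species
a = np.sqrt(2.0 * k_B * T / m)

x = rng.gamma(shape=2.0, scale=1.0, size=100_000)
speeds = a * np.sqrt(x)

# The mean of the flux distribution is (3 sqrt(pi) / 4) a.
print("sample mean:", speeds.mean(), " analytic mean:", 0.75 * np.sqrt(np.pi) * a)
```

The flux weighting shifts the distribution toward faster particles relative to the ordinary Maxwell-Boltzmann density, which is the distinction the abstract draws for surface sources.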
Monte Carlo simulations of sexual reproduction
NASA Astrophysics Data System (ADS)
Stauffer, D.; de Oliveira, P. M. C.; de Oliveira, S. Moss; dos Santos, R. M. Zorzenon
1996-02-01
Modifying the Redfield model of sexual reproduction and the Penna model of biological aging, we compare reproduction with and without recombination in age-structured populations. In contrast to Redfield and in agreement with Bernardes, we find sexual reproduction to be preferred over asexual reproduction. In particular, the presence of old but still reproducing males helps the survival of younger females beyond their reproductive age.
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time in terms of the numerical parameters is derived on the basis of the simulation results obtained in this study.
A semianalytic Monte Carlo code for modelling LIDAR measurements
NASA Astrophysics Data System (ADS)
Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio
2007-10-01
LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard, a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions of the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. Contributions from molecular absorption and from ground and cloud reflection are evaluated as well. The code can simulate both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. The code also provides variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette), which enable the user to drastically reduce the variance of the calculation.
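Of the variance-reduction devices listed, Russian roulette is the simplest to illustrate. In the hedged Python sketch below (the weight cutoff and survival probability are arbitrary choices, not the code's actual settings), low-weight photons are killed with probability 1 - p while survivors have their weights boosted by 1/p, leaving the expected total weight unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

# Russian roulette on low-weight photons: below a weight cutoff, each
# photon survives with probability p_survive (weight multiplied by
# 1/p_survive) or is killed (weight set to zero).  The estimate stays
# unbiased while unimportant histories are culled.
def russian_roulette(weights, cutoff=0.1, p_survive=0.5):
    w = weights.copy()
    low = w < cutoff
    survive = rng.random(w.size) < p_survive
    w[low & survive] /= p_survive          # boost the survivors
    w[low & ~survive] = 0.0                # kill the rest
    return w

w0 = rng.random(1_000_000) * 0.2           # many low-weight photons
w1 = russian_roulette(w0)
print("total weight before:", w0.sum(), " after:", w1.sum())
```

Unbiasedness follows directly: each low-weight photon contributes w/p with probability p, so its expected contribution is still w, while the number of histories to track drops sharply.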
Monte Carlo modelling of positron transport in real world applications
NASA Astrophysics Data System (ADS)
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interactions with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
A new lattice Monte Carlo method for simulating dielectric inhomogeneity
NASA Astrophysics Data System (ADS)
Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei
We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box agree well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation required in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.
Zhang, P; Wang, H Y; Li, Y G; Mao, S F; Ding, Z J
2012-01-01
Monte Carlo simulation methods for the study of electron beam interaction with solids have been mostly concerned with specimens of simple geometry. In this article, we propose a simulation algorithm for treating arbitrarily complex structures in a real sample. The method is based on a finite element triangular mesh modeling of the sample geometry and a space subdivision for accelerating the simulation. Simulation of secondary electron images in scanning electron microscopy has been performed for gold particles on a carbon substrate. Comparison of the simulation result with an experimental image confirms that this method is effective for modeling the complex morphology of a real sample.
Monte Carlo simulation of HERD calorimeter
NASA Astrophysics Data System (ADS)
Xu, M.; Chen, G. M.; Dong, Y. W.; Lu, J. G.; Quan, Z.; Wang, L.; Wang, Z. G.; Wu, B. B.; Zhang, S. N.
2014-07-01
The High Energy cosmic-Radiation Detection (HERD) facility onboard China's Space Station is planned for operation starting around 2020 for about 10 years. It is designed as a next generation space facility focused on indirect dark matter search, precise cosmic ray spectrum and composition measurements up to the knee energy, and high energy gamma-ray monitoring and survey. The calorimeter plays an essential role in the main scientific objectives of HERD. A 3-D cubic calorimeter filled with high granularity crystals as active material is a very promising choice for the calorimeter. HERD is mainly composed of a 3-D calorimeter (CALO) surrounded by silicon trackers (TK) on all five sides except the bottom. CALO is made of 9261 cubes of LYSO crystals, corresponding to about 55 radiation lengths and 3 nuclear interaction lengths. Here the simulation results for the performance of CALO with GEANT4 and FLUKA are presented: 1) the total absorption CALO and its absorption depth for precise energy measurements (energy resolution: 1% for electrons and gamma-rays beyond 100 GeV, 20% for protons from 100 GeV to 1 PeV); 2) its granularity for particle identification (electron/proton separation power better than 10{sup -5}); 3) the homogenous geometry for detecting particles arriving from every unblocked direction for a large effective geometrical factor (>3 m{sup 2}sr for electrons and diffuse gamma-rays, >2 m{sup 2}sr for cosmic ray nuclei); 4) expected observational results such as the gamma-ray line spectrum from dark matter annihilation and spectrum measurements of various cosmic ray chemical components.
Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media
NASA Astrophysics Data System (ADS)
Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.
2013-04-01
Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle steplengths can generate exact results when compared with integration of the diffusion equation. The same method is completely erroneous, however, when applied to nonhomogeneous diffusion coefficients. A simple alternative, jumping at fixed steplengths with appropriate transition probabilities, produces correct results. Here, a model for the diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test case to compare Monte Carlo simulations with fixed and Gaussian steplengths.
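A minimal Python sketch of the fixed-steplength scheme for dc/dt = d/dx(D(x) dc/dx); the diffusion coefficient profile, lattice, and boundary handling below are illustrative, not the calcium-ion model of the paper. Evaluating D at the midpoint of each jump supplies the position-dependent transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed-steplength walkers for dc/dt = d/dx( D(x) dc/dx ).  Hop
# probabilities use D evaluated at the midpoint of each jump; drawing a
# Gaussian steplength from the local D would be incorrect here.
dx, dt = 0.1, 1e-3
L = 10.0                                   # domain length, illustrative

def D(x):
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * x / L)   # space-dependent coefficient

x = np.full(10_000, 0.5 * L)               # all walkers start at the center
for _ in range(2000):
    p_right = D(x + 0.5 * dx) * dt / dx**2
    p_left = D(x - 0.5 * dx) * dt / dx**2
    u = rng.random(x.size)
    step = np.where(u < p_right, dx,
           np.where(u < p_right + p_left, -dx, 0.0))
    x = np.clip(x + step, 0.0, L)          # crude reflecting walls

print("mean position:", x.mean(), " spread:", x.std())
```

The midpoint evaluation is the detail that keeps a uniform concentration stationary under nonhomogeneous D, which a naive position-evaluated Gaussian step fails to do.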
A Monte Carlo simulation approach for flood risk assessment
NASA Astrophysics Data System (ADS)
Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal
2016-04-01
Floods are the most frequent natural disaster in Canada and the most damaging. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected by this disaster because of frequent overflows of the Yamaska River, occurring two to three times per year. Since Hurricane Irene, which hit the region in 2011 causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To this end, a preliminary study evaluating the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g. floodplain extent, flood depth) are adopted to study the risk of flooding. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components, namely hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows the annual average cost of flood damage to buildings to be estimated. The obtained results will be useful for local authorities to support their decisions on risk management and prevention against this disaster.
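The three-component chain can be sketched as nested sampling in Python. Every function below is a hypothetical placeholder; the study's actual flow-probability, flow-submersion height, and Quebec-adapted damage functions are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo chain for annual flood damage: hydrology (annual peak flow),
# hydraulics (flow -> submersion height), and a depth-damage curve.
# All distributions and constants are illustrative stand-ins.
n_years = 100_000

def sample_peak_flow(n):
    """Annual peak flow (m^3/s), illustrative Gumbel distribution."""
    return rng.gumbel(loc=150.0, scale=40.0, size=n)

def depth_from_flow(q):
    """Submersion height (m) above flood stage; zero below the threshold."""
    return np.maximum(0.0, 0.01 * (q - 200.0))

def building_damage(depth):
    """Damage (dollars) from a depth-damage curve saturating at 200k."""
    return 200_000.0 * (1.0 - np.exp(-depth / 0.5))

q = sample_peak_flow(n_years)
damage = building_damage(depth_from_flow(q))
print(f"annual average damage: ${damage.mean():,.0f}")
```

Averaging the sampled damages over many synthetic years is what yields the annual average cost the approach is designed to estimate; real applications would substitute calibrated rating curves and building-stock damage functions.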
Monte Carlo simulations of polyelectrolytes inside viral capsids
NASA Astrophysics Data System (ADS)
Angelescu, Daniel George; Bruinsma, Robijn; Linse, Per
2006-04-01
Structural features of polyelectrolytes such as single-stranded RNA or double-stranded DNA confined inside viral capsids, and the thermodynamics of the encapsidation of the polyelectrolyte into the viral capsid, have been examined for various polyelectrolyte lengths by using a coarse-grained model solved by Monte Carlo simulations. The capsid was modeled as a spherical shell with embedded charges and the genome as a linear jointed chain of oppositely charged beads, and their sizes corresponded to those of a scaled-down T=3 virus. Counterions were explicitly included, but no salt was added. The encapsidated chain was found to be predominantly located at the inner capsid surface, in a disordered manner for flexible chains and in a spool-like structure for stiff chains. The distribution of the small ions was strongly dependent on the polyelectrolyte-capsid charge ratio. The encapsidation enthalpy was negative and its magnitude decreased with increasing polyelectrolyte length, whereas the encapsidation entropy displayed a maximum when the capsid and polyelectrolyte had equal absolute charge. The encapsidation process remained thermodynamically favorable for genome charges up to ca. 3.5 times the capsid charge. The chain stiffness had only a relatively weak effect on the thermodynamics of the encapsidation.
Monte Carlo simulation of the Neutrino-4 experiment
Serebrov, A. P. Fomin, A. K.; Onegin, M. S.; Ivochkin, V. G.; Matrosov, L. N.
2015-12-15
Monte Carlo simulation of the two-section reactor antineutrino detector of the Neutrino-4 experiment is carried out. The scintillation-type detector is based on the inverse beta-decay reaction. The antineutrino is recorded by two successive signals from the positron and the neutron. The simulation of the detector sections and the active shielding is performed. As a result of the simulation, the distributions of photomultiplier signals from the positron and the neutron are obtained. The efficiency of the detector depending on the signal recording thresholds is calculated.
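The two-successive-signal selection can be illustrated with a toy Monte Carlo in Python; the spectrum, capture time constant, and thresholds below are invented for illustration and are not the Neutrino-4 values:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy delayed-coincidence selection for inverse beta decay: a prompt
# positron signal followed by a delayed neutron-capture signal.  All
# distribution parameters are illustrative stand-ins.
n_events = 100_000
prompt_E = rng.exponential(scale=3.0, size=n_events)      # MeV, toy spectrum
capture_dt = rng.exponential(scale=50e-6, size=n_events)  # s, capture delay

prompt_thr, window = 1.0, 200e-6          # MeV threshold, coincidence window
accepted = (prompt_E > prompt_thr) & (capture_dt < window)
print("selection efficiency:", accepted.mean())
```

Scanning `prompt_thr` and `window` in such a simulation is how the dependence of detector efficiency on the recording thresholds, mentioned in the abstract, is mapped out.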
NASA Astrophysics Data System (ADS)
Firanescu, George; Luckhaus, David; Patey, Grenfell N.; Atreya, Sushil K.; Signorell, Ruth
2011-04-01
Molecular level Monte Carlo simulations have been performed with various model potentials for the CH 4-N 2 vapor-liquid equilibrium at conditions prevalent in the atmosphere of Saturn's moon Titan. With a single potential parameter adjustment to reproduce the vapor-liquid equilibrium at a higher temperature, Monte Carlo simulations are in excellent agreement with available laboratory measurements. The results demonstrate the ability of simple pair potential models to describe phase equilibria with the requisite accuracy for atmospheric modeling, while keeping the number of adjustable parameters at a minimum. This allows for stable extrapolation beyond the range of available laboratory measurements into the supercooled region of the phase diagram, so that Monte Carlo simulations can serve as a reference to validate phenomenological models commonly used in atmospheric modeling. This is most important when the relevant region of the phase diagram lies outside the range of laboratory measurements as in the case of Titan. The present Monte Carlo simulations confirm the validity of phenomenological thermodynamic equations of state specifically designed for application to Titan. The validity extends well into the supercooled region of the phase diagram. The possible range of saturation levels of Titan's troposphere above altitudes of 7 km is found to be completely determined by the remaining uncertainty of the most recent revision of the Cassini-Huygens data, yielding a saturation of 100 ± 6% with respect to CH 4-N 2 condensation up to an altitude of about 20 km.
Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
To examine domain decomposition for a coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
NASA Astrophysics Data System (ADS)
Robl, Jörg; Hergarten, Stefan
2015-04-01
Debris flows are globally abundant threats to settlements and infrastructure in mountainous regions. Hazard zone planning and mitigation strategies rely crucially on numerical models that describe granular flow on general topography by solving a depth-averaged form of the Navier-Stokes equations in combination with an appropriate flow resistance law. In the case of debris flows, the Voellmy rheology is a widely used constitutive law describing the flow resistance. It combines a velocity-independent Coulomb friction term with a term proportional to the square of the velocity, as is commonly used for turbulent flow. Parameters of the Voellmy fluid are determined by back analysis from observed events so that modelled events mimic their historical counterparts. Determined parameters characterizing individual debris flows show a large variability (related to fluid composition and surface roughness). However, there may be several sets of parameters that lead to a similar depositional pattern but cause large differences in flow velocity and momentum along the flow path. Fluid volumes of hazardous debris flows are estimated by analyzing historic events, precipitation time series, hydrographs or empirical relationships that correlate fluid volumes with the drainage areas of torrential catchments. Besides uncertainties in the determination of the fluid volume, the position and geometry of the initial masses of forthcoming debris flows are in general not well constrained but heavily influence the flow dynamics and the depositional pattern even in the run-out zones. In this study we present a new, freely available numerical description of rapid mass movements based on the GERRIS framework and early results of a Monte Carlo simulation exploring the effects of the aforementioned parameters on run-out distance, inundated area and momentum. The novel numerical model describes rapid mass movements on complex topography using the shallow water equations in Cartesian coordinates.
Monte Carlo Simulations on a 9-node PC Cluster
NASA Astrophysics Data System (ADS)
Gouriou, J.
Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time of the LNHB metrological applications from several weeks to a few days. This approach relies on a PC cluster running the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, the latter two being adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem simulated.
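The speedup comes from the embarrassingly parallel nature of Monte Carlo: histories are independent, so each node runs its own stream with a distinct seed and the tallies are merged at the end. A minimal Python sketch of that split-and-merge pattern (the PVM master/worker plumbing is omitted; the worker function and counts are illustrative, not from the LNHB codes):

```python
import random

def mc_chunk(n_histories, seed):
    """One worker's share of histories: here, hits for a simple pi estimate."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_histories)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_estimate(total_histories, n_workers):
    """Split histories across workers (stand-in for PVM tasks), merge tallies."""
    per_worker = total_histories // n_workers
    hits = sum(mc_chunk(per_worker, seed) for seed in range(n_workers))
    return 4.0 * hits / (per_worker * n_workers)

print(parallel_estimate(100_000, 10))  # ≈ 3.14
```

Because each worker only returns a tally, communication is negligible and the speedup is close to the number of nodes, consistent with the factor of 13 to 18 reported for 9 dual-purpose machines.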
Computer Monte Carlo simulation in quantitative resource estimation
Root, D.H.; Menzie, W.D.; Scott, W.A.
1992-01-01
The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. ?? 1992 Oxford University Press.
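Step (3) is straightforward to sketch. Assuming the number-of-deposits estimate is expressed as a small probability table and the historical deposits as paired (tonnage, grade) records, one Monte Carlo trial draws a deposit count and then resamples that many historical deposits; drawing tonnage and grade together preserves their dependence, as the program flow in the article emphasizes. All names and data below are illustrative:

```python
import random

def contained_metal(n_trials, deposit_count_pmf, tonnages, grades, seed=0):
    """Monte Carlo combination of a deposit-count distribution with
    historical (tonnage, grade) data; returns the distribution of total
    contained metal across trials."""
    rng = random.Random(seed)
    counts = list(deposit_count_pmf)
    probs = [deposit_count_pmf[c] for c in counts]
    totals = []
    for _ in range(n_trials):
        n_deposits = rng.choices(counts, weights=probs)[0]
        total = 0.0
        for _ in range(n_deposits):
            i = rng.randrange(len(tonnages))  # resample one historical deposit
            total += tonnages[i] * grades[i]  # paired draw keeps dependence
        totals.append(total)
    return totals

# toy input: 50/50 chance of zero or one deposit, one historical analogue
dist = contained_metal(1000, {0: 0.5, 1: 0.5}, [1.0e6], [0.004])
```

The empirical quantiles of `dist` then give the probability distribution of contained metal used for economic analysis.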
Programs for calibration-based Monte Carlo simulation of recharge areas.
Starn, J Jeffrey; Bagtzoglou, Amvrossios C
2012-01-01
One use of groundwater flow models is to simulate contributing recharge areas to wells or springs. Particle tracking can be used to simulate these recharge areas, but in many cases the modeler is not sure how accurate these recharge areas are because parameters such as hydraulic conductivity and recharge have errors associated with them. The scripts described in this article (GEN_LHS and MCDRIVER_LHS) use the Python scripting language to run a Monte Carlo simulation with Latin hypercube sampling where model parameters such as hydraulic conductivity and recharge are randomly varied for a large number of model simulations, and the probability of a particle being in the contributing area of a well is calculated based on the results of multiple simulations. Monte Carlo simulation provides one useful measure of the variability in modeled particles. The Monte Carlo method described here is unique in that it uses parameter sets derived from the optimal parameters, their standard deviations, and their correlation matrix, all of which are calculated during nonlinear regression model calibration. In addition, this method uses a set of acceptance criteria to eliminate unrealistic parameter sets.
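The sampling scheme described can be sketched generically: stratify each parameter's cumulative probability into N equal bins (the Latin hypercube step), transform to standard normals, impose the calibration correlation via a Cholesky factor, then shift and scale by the optimal values and standard deviations. This is a reconstruction of the general technique, not the GEN_LHS code itself:

```python
import numpy as np
from statistics import NormalDist

def lhs_parameter_sets(n_sets, means, stds, corr, rng=None):
    """Latin hypercube sample of parameters (e.g. K, recharge) from the
    optimal values, standard deviations, and correlation matrix produced
    by nonlinear-regression calibration."""
    rng = rng or np.random.default_rng(0)
    k = len(means)
    # stratified uniforms: one draw per equal-probability bin, per parameter
    u = (np.arange(n_sets)[:, None] + rng.random((n_sets, k))) / n_sets
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])         # decouple the columns
    z = np.vectorize(NormalDist().inv_cdf)(u)      # independent std normals
    L = np.linalg.cholesky(np.asarray(corr))       # impose correlation
    z = z @ L.T
    return np.asarray(means) + z * np.asarray(stds)

sets = lhs_parameter_sets(100, [1e-4, 2e-3], [2e-5, 5e-4],
                          [[1.0, 0.3], [0.3, 1.0]])
accepted = sets[(sets > 0).all(axis=1)]  # toy acceptance criterion
```

Each accepted row is one parameter set for a model run; the fraction of runs in which a particle lands in a well's contributing area then estimates the probability map the article describes.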
Rapid Monte Carlo simulation of detector DQE(f)
Star-Lack, Josh; Sun, Mingshan; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca; Abel, Eric
2014-01-01
Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10{sup 7} − 10{sup 9} detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation
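The estimator at issue is compact: the IEC-style NPS is the ensemble average of the squared 2D Fourier transform of mean-subtracted flood realizations, scaled by pixel area over the number of pixels, and averaging over more realizations is what drives the MSE down as 1/N. A toy reconstruction (white Poisson noise stands in for real detector images; the normalization follows the IEC 62220-1 convention in spirit but is simplified):

```python
import numpy as np

def nps_2d(flood_images, pixel_pitch=1.0):
    """IEC-style NPS estimate: average |2D DFT|^2 of mean-subtracted flood
    realizations, scaled by pixel area / number of pixels. The estimator's
    MSE falls as 1/N_images, matching the paper's derivation."""
    stack = np.asarray(flood_images, dtype=float)
    n_images, ny, nx = stack.shape
    fluct = stack - stack.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(fluct)) ** 2       # over the last two axes
    return spectra.mean(axis=0) * pixel_pitch ** 2 / (nx * ny)

rng = np.random.default_rng(1)
floods = rng.poisson(1000, size=(32, 64, 64))       # uncorrelated "floods"
nps = nps_2d(floods)
# for uncorrelated Poisson noise the NPS is flat at ~variance * pixel area
```

For correlated (blurred) images, the same estimator instead reveals the spectral shape of the noise, which is what the DQE(f) computation needs.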
Quantum Monte Carlo Simulations of Adulteration Effect on Bond-Alternating Spin-1/2 Chain
NASA Astrophysics Data System (ADS)
Zhang, Peng; Xu, Zhaoxin; Ying, Heping; Dai, Jianhui; Crompton, Peter
The S=1/2 Heisenberg chain with bond alternation and randomness of antiferromagnetic (AFM) and ferromagnetic (FM) interactions is investigated by quantum Monte Carlo simulations using the loop/cluster algorithm. Our results show interesting finite-temperature magnetic properties of this model. The relevance of our study to previous investigations is discussed.
Teacher's Corner: Using SAS for Monte Carlo Simulation Research in SEM
ERIC Educational Resources Information Center
Fan, Xitao; Fan, Xiaotao
2005-01-01
This article illustrates the use of the SAS system for Monte Carlo simulation work in structural equation modeling (SEM). Data generation procedures for both multivariate normal and nonnormal conditions are discussed, and relevant SAS codes for implementing these procedures are presented. A hypothetical example is presented in which Monte Carlo…
Monte Carlo simulations of atmospheric spreading functions for space-borne optical sensors
NASA Technical Reports Server (NTRS)
Kiang, R. K.
1982-01-01
A Monte Carlo radiative transfer model is used to simulate atmospheric spreading effects. The spreading functions for several vertical aerosol profiles are obtained. The dependence on atmospheric conditions and aerosol properties is investigated, and the importance of the effect on MSS and TM measurements is assessed.
3D Direct Simulation Monte Carlo Code Which Solves for Geometries
1998-01-13
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
Monte Carlo simulation of the neutron monitor yield function
NASA Astrophysics Data System (ADS)
Mangeard, P.-S.; Ruffolo, D.; Sáiz, A.; Madlee, S.; Nutaro, T.
2016-08-01
Neutron monitors (NMs) are ground-based detectors that measure variations of the Galactic cosmic ray flux at GV range rigidities. Differences in configuration, electronics, surroundings, and location induce systematic effects on the calculation of the yield functions of NMs worldwide. Different estimates of NM yield functions can differ by a factor of 2 or more. In this work, we present new Monte Carlo simulations to calculate NM yield functions and perform an absolute (not relative) comparison with the count rate of the Princess Sirindhorn Neutron Monitor (PSNM) at Doi Inthanon, Thailand, both for the entire monitor and for individual counter tubes. We model the atmosphere using profiles from the Global Data Assimilation System database and the Naval Research Laboratory Mass Spectrometer, Incoherent Scatter Radar Extended model. Using FLUKA software and the detailed geometry of PSNM, we calculated the PSNM yield functions for protons and alpha particles. An agreement better than 9% was achieved between the PSNM observations and the simulated count rate during the solar minimum of December 2009. The systematic effect from the electronic dead time was studied as a function of primary cosmic ray rigidity at the top of the atmosphere up to 1 TV. We show that the effect is not negligible and can reach 35% at high rigidity for a dead time >1 ms. We analyzed the response function of each counter tube at PSNM using its actual dead time, and we provide normalization coefficients between count rates for various tube configurations in the standard NM64 design that are valid to within ˜1% for such stations worldwide.
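The dead-time systematic quoted above is easy to gauge with a back-of-envelope model. The paper's full simulation tracks dead time per counter tube; as an illustration only, the standard non-paralyzable model (measured rate m = n / (1 + nτ)) already shows how a >1 ms dead time at high instantaneous rates produces losses of the reported magnitude. The rates below are made up for the example:

```python
def dead_time_loss(true_rate_hz, dead_time_s):
    """Fractional count loss under a non-paralyzable dead-time model:
    measured rate m = n / (1 + n*tau), so loss = n*tau / (1 + n*tau)."""
    n, tau = true_rate_hz, dead_time_s
    measured = n / (1.0 + n * tau)
    return 1.0 - measured / n

# a burst of secondaries from one high-rigidity primary can drive the
# instantaneous rate in a single tube very high; with tau > 1 ms the
# losses become substantial
print(dead_time_loss(500.0, 1.2e-3))  # 0.375, i.e. ~37% of counts lost
```

This is why the yield-function comparison in the paper must use each tube's actual dead time rather than a station-wide average.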
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Monte Carlo-based simulation of dynamic jaws tomotherapy
Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S.
2011-09-15
Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC-based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the "cheese" phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws with constant couch speed ("running start stop," RSS) and symmetric jaws with variable couch speed ("symmetric running start stop," SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis
Monte Carlo Simulations of the Inside Intron Recombination
NASA Astrophysics Data System (ADS)
Cebrat, Stanisław; Pękalski, Andrzej; Scharf, Fabian
Biological genomes are divided into coding and non-coding regions. Introns are non-coding parts within genes, while the remaining non-coding parts are intergenic sequences. To study the evolutionary significance of inside intron recombination we have used two models based on the Monte Carlo method. In our computer simulations we have implemented the internal structure of genes by declaring the probability of recombination between exons. One situation in which inside intron recombination is advantageous is the recovery of functional genes by combining proper exons dispersed in the genetic pool of the population after a long period without selection for the function of the gene. Populations then have to pass through a bottleneck. Such events are rather rare, and we expected that there should be other phenomena that profit from inside intron recombination. In fact, we have found that inside intron recombination is advantageous only when, after recombination, parental haplotypes are available besides the recombinant forms and selection already acts on gametes.
Monte Carlo simulation of topographic contrast in scanning ion microscope.
Ohya, Kaoru; Ishitani, Tohru
2004-01-01
Topographic contrast of secondary-electron (SE) images in a scanning ion microscope (SIM) using a focused gallium (Ga) ion beam is investigated by Monte Carlo simulation. In particular, the SE yield of heavy materials under the impact of 30 keV Ga ions increases much faster with the angle of incidence of the primary beam than under the impact of electrons at ≤10 keV. This indicates that the topographic contrast for heavy materials is clearer in a SIM image than in a scanning electron microscope (SEM) image; for light materials both contrasts are similar. Semicircular rods with different radii, and steps with large heights and a small wall angle, made of Si and Au, are modeled for comparison with SE images in SEM. Line profiles of the SE intensity and pseudo-images constructed from the profiles reveal some differences in topographic contrast between SIM and SEM. We discuss not only the incident-angle effect on the contrast, but also the effects of re-entrance of primary particles and SEs onto the neighboring surface, the effect of a sharp edge on the sample surface, and the effects of pattern size and beam size.
Three-dimensional Monte Carlo simulation of gamma-ray scattering and production in the atmosphere
NASA Technical Reports Server (NTRS)
Morris, Daniel J.
1989-01-01
Results are reported from Monte Carlo numerical simulations of atmospheric gamma-ray scattering and production. The basic physical principles involved in the construction of the models are reviewed, and results are presented in extensive graphs for low-energy gamma rays with the spectra of gamma-ray bursts, solar flares, the Crab pulsar, and 511-keV line radiation. It is shown that the model accurately reproduces the characteristics of atmospheric albedo radiation, including details of the angular distribution. The potential applicability of the Monte Carlo technique to studies of the near-earth radiation environment is indicated.
Polarimetric lidar returns in the ocean: a Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Adams, James T.; Kattawar, George W.
1997-02-01
Anisotropy in the polarization state of backscattered light from a polarized beam incident on suspensions in water analogous to hydrosols in seawater has been observed experimentally. Viewed through an oriented polarizer, characteristic patterns in the backscattered light are produced. We present the results of Monte Carlo simulations of these physical effects, demonstrating excellent agreement with published and unpublished experimental observations. These simulations show that the observed effects are produced by the incoherent scattering of light in the range of volume fractions reported, and that this treatment should allow predictions to be made about the application of this technique to ocean-probing lidar.
NASA Astrophysics Data System (ADS)
Belinato, W.; Santos, W. S.; Paschoal, C. M. M.; Souza, D. N.
2015-06-01
The combination of positron emission tomography (PET) and computed tomography (CT) has been extensively used in oncology for diagnosis and staging of tumors, radiotherapy planning and follow-up of patients with cancer, as well as in cardiology and neurology. This study uses the Monte Carlo method to determine the internal organ dose deposition in computational phantoms exposed to the multidetector CT (MDCT) beams of two PET/CT devices operating with different parameters. The different MDCT beam parameters were largely related to the total filtration, which alters the beam energy inside the gantry. This parameter was determined experimentally with the Accu-Gold Radcal measurement system. The experimental values of the total filtration were included in the simulations of two MCNPX code scenarios. The absorbed organ doses obtained in the MASH and FASH phantoms indicate that the bowtie filter geometry and the energy of the X-ray beam have a significant influence on the results, although this influence can be compensated by adjusting other variables such as the tube current-time product (mAs) and pitch during PET/CT procedures.
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders
Viveros-Méndez, P. X.; Aranda-Espinoza, S.
2014-07-28
In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the Coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e{sup 2}/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration. A semi-infinite simulation cell was used with dimensions L{sub x} ≈ L{sub y} and L{sub z} = 5L{sub x}, where L{sub x}, L{sub y}, and L{sub z} are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to become more packed at each layer and to arrange in local domains with orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as the tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques can generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze all of it. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
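The selection/classification half of that pipeline can be sketched with plain Python: a k-nearest-neighbor classifier labels Monte Carlo runs pass/fail from a subset of dispersion parameters, and greedy sequential forward selection grows the subset by whichever parameter most improves leave-one-out accuracy. The kernel density estimation step is omitted, and the helper names are mine, not the NASA tool's:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbors vote with Euclidean distance."""
    nearest = sorted((math.dist(row, x), y)
                     for row, y in zip(train_X, train_y))[:k]
    return Counter(y for _, y in nearest).most_common(1)[0][0]

def loo_accuracy(X, y, feats, k=3):
    """Leave-one-out kNN accuracy using only the selected parameter columns."""
    proj = [[row[f] for f in feats] for row in X]
    hits = 0
    for i in range(len(X)):
        pred = knn_predict(proj[:i] + proj[i+1:], y[:i] + y[i+1:], proj[i], k)
        hits += pred == y[i]
    return hits / len(X)

def forward_select(X, y, n_keep, k=3):
    """Greedy sequential forward selection: repeatedly add the parameter
    that most improves classification of pass/fail runs."""
    chosen = []
    while len(chosen) < n_keep:
        best = max((f for f in range(len(X[0])) if f not in chosen),
                   key=lambda f: loo_accuracy(X, y, chosen + [f], k))
        chosen.append(best)
    return chosen
```

The order in which parameters are chosen is itself the ranking the abstract describes: early selections are the dispersions, or combinations of dispersions, that best separate failures from successes.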
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
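The Map/Reduce split maps naturally onto Monte Carlo because photon histories are independent: each Map task simulates its own seeded batch and the Reduce step merges absorption tallies. A local, single-process sketch of that division of labor (a toy one-dimensional absorption model stands in for MC321's scattering physics; Hadoop would distribute the `map` calls across nodes):

```python
import random
from functools import reduce

def map_task(args):
    """Map: simulate one batch of photon histories, emit an absorption tally.
    Toy physics: at each step a photon is absorbed with probability 0.1."""
    n_photons, seed = args
    rng = random.Random(seed)
    tally = {}
    for _ in range(n_photons):
        depth = 0
        while rng.random() > 0.1:   # survive this step, go one layer deeper
            depth += 1
        tally[depth] = tally.get(depth, 0) + 1
    return tally

def reduce_task(merged, part):
    """Reduce: merge absorption histograms from independent map tasks."""
    for k, v in part.items():
        merged[k] = merged.get(k, 0) + v
    return merged

batches = [(1000, seed) for seed in range(8)]   # one tuple per map task
histogram = reduce(reduce_task, map(map_task, batches), {})
```

Because tallies are additive, the reduce step is insensitive to task order and to re-execution of failed tasks, which is the fault-tolerance property the paper exploits.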
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-09-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, a neutron is first selected from the source distribution and projected through the instrument, using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something; if it reaches the detector, it is tallied in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
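The source-transport-tally loop described above can be written out in a few lines. This is a schematic analogue in Python, not MCLIB's Fortran interface: each instrument element is reduced to a wavelength-dependent transmission probability, and surviving neutrons are histogrammed at the detector:

```python
import random

def run_instrument(n_neutrons, elements, seed=0):
    """Skeleton of the MC loop: draw a neutron from the source, pass it
    through each instrument element in turn (any element may absorb it),
    and histogram the survivors at the detector."""
    rng = random.Random(seed)
    histogram = {}
    for _ in range(n_neutrons):
        wavelength = rng.uniform(1.0, 10.0)   # source: wavelength in angstroms
        alive = True
        for transmission in elements:         # e.g. guide, chopper, sample
            if rng.random() > transmission(wavelength):
                alive = False                 # absorbed; stop tracking
                break
        if alive:
            bin_ = int(wavelength)            # detector: tally by wavelength
            histogram[bin_] = histogram.get(bin_, 0) + 1
    return histogram

# toy instrument: a perfect guide plus a filter that cuts long wavelengths
hist = run_instrument(10_000,
                      [lambda w: 1.0,
                       lambda w: 1.0 if w < 5.0 else 0.2])
```

In MCLIB each element is instead a geometry region with its own interaction subroutine, but the control flow, one independent history at a time ending in a detector tally, is the same.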
Three-dimensional electron microscopy simulation with the CASINO Monte Carlo software.
Demers, Hendrix; Poirier-Demers, Nicolas; Couture, Alexandre Réal; Joly, Dany; Guilmain, Marc; de Jonge, Niels; Drouin, Dominique
2011-01-01
Monte Carlo software is widely used to understand the capabilities of electron microscopes. To study more realistic applications with complex samples, 3D Monte Carlo software is needed. In this article, the development of the 3D version of CASINO is presented. The software features a graphical user interface, an efficient (in terms of simulation time and memory use) 3D simulation model, and accurate physics models for electron microscopy applications, and it is freely available to the scientific community at www.gel.usherbrooke.ca/casino/index.html. It can be used to model backscattered, secondary, and transmitted electron signals as well as absorbed energy. Features such as scan points and shot noise allow the simulation and study of realistic experimental conditions. The software has an improved energy range for scanning electron microscopy and scanning transmission electron microscopy applications.
Accelerating particle-in-cell simulations using multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2015-11-01
Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
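The telescoping idea behind MLMC can be sketched on a toy SDE rather than the full PIC problem (the Ornstein-Uhlenbeck equation, payoff, and sample counts below are illustrative): each level estimates the expected difference between a fine and a coarse Euler-Maruyama discretization driven by the *same* Brownian increments, so the differences have small variance and only a few expensive fine-step samples are needed:

```python
import random, math

def level_estimator(level, n_samples, T=1.0, rng=None):
    """Estimate E[P_l - P_{l-1}] with coupled coarse/fine Euler-Maruyama
    paths of dX = -X dt + dW, payoff P = X(T)^2. Sharing the Brownian
    increments between the two discretizations is what shrinks the
    variance of the difference at higher levels."""
    rng = rng or random.Random(level)
    n_fine = 2 ** level
    dt = T / n_fine
    total = 0.0
    for _ in range(n_samples):
        xf = xc = 1.0
        dw_pair = 0.0
        for step in range(n_fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            xf += -xf * dt + dw                  # fine path, step dt
            dw_pair += dw
            if step % 2 == 1 and level > 0:      # coarse path, step 2*dt
                xc += -xc * (2 * dt) + dw_pair
                dw_pair = 0.0
        coarse_payoff = xc * xc if level > 0 else 0.0
        total += xf * xf - coarse_payoff
    return total / n_samples

# telescoping sum: many cheap coarse samples, few expensive fine ones
estimate = sum(level_estimator(l, n)
               for l, n in [(0, 20_000), (1, 5_000), (2, 1_000)])
```

Generalizing this to PIC is exactly the hard part the abstract addresses: the coarse and fine particle pushes must also see self-consistently coupled field solves, not just shared noise.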
Phase diagrams of scalemic mixtures: A Monte Carlo simulation study
NASA Astrophysics Data System (ADS)
Vlot, Margot J.; van Miltenburg, J. Cornelis; Oonk, Harry A. J.; van der Eerden, Jan P.
1997-12-01
In this paper, a simplified model was used to describe the interactions between the enantiomers in a scalemic mixture. Monte Carlo simulations were performed to determine several thermodynamic properties as a function of temperature and mole fraction for the solid, liquid, and gas phases. Phase diagrams were constructed using a macroscopic thermodynamic program, PROPHASE. The model consists of spherical D and L molecules interacting via modified Lennard-Jones potentials (σDD=σLL, ɛDD=ɛLL, ɛDL=eɛDD, and σDL=sσDD). The two heterochiral interaction parameters, e and s, were found to be sufficient to produce all types of phase diagrams that have been found for these systems experimentally. Conglomerates were found when the heterochiral interaction strength was smaller than the homochiral value, e<1. A different heterochiral interaction distance, s≠1, led to racemic compounds, with an ordered distribution of D and L molecules. The CsCl-structured compound was found to be stable for short DL interactions, s<1 (e=1), with an enantiotropic transition to a solid solution for s=0.96. Longer heterochiral distances, s>1, result in the formation of layered fcc compounds. The liquid regions in the phase diagram become larger for s≠1, caused by a strong decrease of the melting point for s<1 and s>1, in combination with only a small effect on the boiling point for s<1, and even an increase of the boiling point for s>1. Segregation into two different solid solutions, one with low mole fraction and the other one close to x=0.25, was obtained for these mixtures as well.
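The interaction model described above can be written down directly: homochiral (DD, LL) pairs use the base Lennard-Jones parameters, while heterochiral (DL) pairs scale the well depth by e and the interaction length by s. A minimal sketch in reduced units (parameter values for illustration only):

```python
def lj(r, sigma=1.0, eps=1.0):
    """Standard 12-6 Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def pair_energy(r, same_chirality, e=0.9, s=1.0, sigma=1.0, eps=1.0):
    """Pair energy in the scalemic model: DD/LL pairs use (sigma, eps),
    while DL pairs use (s*sigma, e*eps), following the abstract's
    parameterization sigma_DL = s*sigma_DD, eps_DL = e*eps_DD."""
    if same_chirality:
        return lj(r, sigma, eps)
    return lj(r, s * sigma, e * eps)
```

With e<1 the heterochiral well is shallower than the homochiral one, which is the regime the abstract associates with conglomerate formation; s≠1 shifts the DL minimum position instead.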
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficients for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations, we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
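A photon Monte Carlo of this kind tracks photon packets through exponentially distributed free paths and deposits a fraction of each packet's weight at every interaction. The sketch below is a deliberately simplified homogeneous-slab version with isotropic scattering and a 1D depth tally, not the voxelized MRI-based head model of the study; the optical coefficients are illustrative placeholders.

```python
import random
import math

def mc_photon_absorption(n_photons=20000, mu_a=0.1, mu_s=10.0, depth=2.0,
                         n_bins=20, rng=random):
    """Minimal photon-packet Monte Carlo in a homogeneous slab: packets take
    exponential free paths, deposit the fraction mu_a/mu_t of their weight at
    each interaction, and rescatter isotropically. Returns the absorbed
    weight tallied per depth bin."""
    mu_t = mu_a + mu_s
    bins = [0.0] * n_bins
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0              # depth, direction cosine, weight
        while w > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z < 0.0 or z > depth:          # packet escapes the slab
                break
            bins[min(int(z / depth * n_bins), n_bins - 1)] += w * mu_a / mu_t
            w *= mu_s / mu_t                  # surviving (scattered) weight
            uz = 2.0 * rng.random() - 1.0     # isotropic new direction cosine
    return bins
```

In the study's setting the single slab would be replaced by the MRI voxel grid, with per-voxel coefficients and the blood-volume weighting applied to the resulting absorption map.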
The proton therapy nozzles at Samsung Medical Center: A Monte Carlo simulation study using TOPAS
NASA Astrophysics Data System (ADS)
Chung, Kwangzoo; Kim, Jinsung; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-07-01
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles by using TOol for PArticle Simulation (TOPAS). At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, the novel features of TOPAS, such as the time feature and the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation have been defined. Dosimetric properties, such as the percent depth dose curve, the spread-out Bragg peak (SOBP), and the beam spot size, have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. A radiotherapy (RT) plan exported from the TPS is interpreted by the interface and translated into TOPAS input text. The developed Monte Carlo nozzle model can be used to estimate the non-beam performance, such as the neutron background, of the nozzles. Furthermore, the nozzle model can be used to study the mechanical optimization of the design of the nozzle.
Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.
Phonon Transport Analysis of Semiconductor Nanocomposites Using Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Malladi, Mayank
Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as finite-volume and discrete-ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites are
Accelerating Markov chain Monte Carlo simulation through sequential updating and parallel computing
NASA Astrophysics Data System (ADS)
Ren, Ruichao
Monte Carlo simulation is a statistical sampling method used in studies of physical systems with properties that cannot be easily obtained analytically. The phase behavior of the Restricted Primitive Model of electrolyte solutions on the simple cubic lattice is studied using grand canonical Monte Carlo simulations and finite-size scaling techniques. The transition between disordered and ordered, NaCl-like structures is continuous and second-order at high temperatures, and discrete and first-order at low temperatures. The line of continuous transitions meets the line of first-order transitions at a tricritical point. A new algorithm, Random Skipping Sequential (RSS) Monte Carlo, is proposed, justified, and shown analytically to have better mobility over the phase space than the conventional Metropolis algorithm satisfying strict detailed balance. The new algorithm employs sequential updating, and yields greatly enhanced sampling statistics compared with the Metropolis algorithm with random updating. A parallel version of Markov chain theory is introduced and applied to accelerating Monte Carlo simulation via cluster computing. It is shown that sequential updating is the key to reducing the inter-processor communication or synchronization which slows down parallel simulation with an increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time by the new method for systems of large and moderate sizes.
Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C
2010-12-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, dedicated local devices that are easy to maintain and to code and have low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Monte Carlo Simulation of New UCN Source at LENS
NASA Astrophysics Data System (ADS)
McChesney, Patrick
2006-10-01
My research has focused on a Monte Carlo study of a new ultracold neutron (UCN) source under development at the Low Energy Neutron Source (LENS) at the Indiana University Cyclotron Facility. UCNs are neutrons with energies below 3 x 10^-7 eV. They can be used to make extremely accurate measurements of the electric dipole moment of the neutron to test time reversal symmetry. LENS has successfully produced cold neutrons, and we are designing an extension to study UCN production. I have modeled a UCN module which will test a novel technique involving magnon interactions in solid oxygen to produce UCNs. Our module slows down the fast neutrons produced by a (p,n) reaction in a Be target with a polyethylene cold neutron moderator and directs the resulting cold neutrons into a half-liter piece of solid oxygen with water and polyethylene as reflectors. Cold neutrons enter the solid oxygen and are down-scattered to the UCN energy range. These UCNs are then directed upwards toward a storage bottle at a higher elevation, being further slowed by gravity. I have tested various design configurations trying to maximize the cold neutron flux through the solid oxygen component while minimizing the heat load in the cryogenic system. The simulations predict a cold neutron flux of 2 x 10^10 n/cm^2/s with a heat load around 1 W from 2.5 mA of 13 MeV protons. My findings are being used as a guideline to design our module.
Lanczos and Recursion Techniques for Multiscale Kinetic Monte Carlo Simulations
Rudd, R E; Mason, D R; Sutton, A P
2006-03-13
We review an approach to the simulation of the class of microstructural and morphological evolution involving both relatively short-ranged chemical and interfacial interactions and long-ranged elastic interactions. The calculation of the anharmonic elastic energy is facilitated with Lanczos recursion. The elastic energy changes affect the rate of vacancy hopping, and hence the rate of microstructural evolution due to vacancy mediated diffusion. The elastically informed hopping rates are used to construct the event catalog for kinetic Monte Carlo simulation. The simulation is accelerated using a second order residence time algorithm. The effect of elasticity on the microstructural development has been assessed. This article is related to a talk given in honor of David Pettifor at the DGP60 Workshop in Oxford.
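The event catalog described above feeds a residence-time (BKL-type) kinetic Monte Carlo loop: one event is chosen with probability proportional to its rate, and time advances by an exponentially distributed increment. A minimal first-order sketch follows (the second-order variant and the elastically informed rate updates of the paper are omitted):

```python
import random
import math
import bisect

def kmc_step(rates, rng=random):
    """One residence-time (BKL) KMC step: select an event with probability
    proportional to its rate via the cumulative-rate table, and advance the
    clock by an exponential deviate with mean 1/total_rate."""
    total = sum(rates)
    cum, running = [], 0.0
    for k in rates:
        running += k
        cum.append(running)
    event = bisect.bisect_left(cum, rng.random() * total)
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt
```

In a vacancy-diffusion simulation, `rates` would hold the elastically informed hopping rates of all candidate vacancy jumps, recomputed after each accepted event.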
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
NASA Astrophysics Data System (ADS)
Gu, J.; Bednarz, B.; Caracappa, P. F.; Xu, X. G.
2009-05-01
The latest multiple-detector technologies have further increased the popularity of x-ray CT as a diagnostic imaging modality. There is a continuing need to assess the potential radiation risk associated with such rapidly evolving multi-detector CT (MDCT) modalities and scanning protocols. This need can be met by the use of CT source models that are integrated with patient computational phantoms for organ dose calculations. To this end, this work developed and validated a model of an MDCT scanner using the Monte Carlo method and integrated pregnant patient phantoms into the scanner model to assess the dose to the fetus as well as the doses to the organs and tissues of the pregnant patient. The Monte Carlo code MCNPX was used to simulate the x-ray source, including the energy spectrum, filter and scan trajectory. Detailed CT scanner components were specified using an iterative trial-and-error procedure for a GE LightSpeed CT scanner. The scanner model was validated by comparing simulated results against measured CTDI values and dose profiles reported in the literature. The source movement along the helical trajectory was simulated using pitches of 0.9375 and 1.375. The validated scanner model was then integrated with phantoms of a pregnant patient in three different gestational periods to calculate organ doses. It was found that the dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs for the chest and kidney scans, respectively. For the chest scan of the 6 month patient phantom and the 9 month patient phantom, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. The paper also discusses how these fetal dose values can be used to evaluate imaging procedures and to assess risk using recommendations of the report from AAPM Task Group 36. This work demonstrates the ability of modeling and validating an MDCT scanner by the Monte Carlo method, as well as
Monte Carlo simulations and dosimetric studies of an irradiation facility
NASA Astrophysics Data System (ADS)
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of the dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosimeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user, under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results obtained was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, it uses randomly seeded population-level outcome probabilities to drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (Sic10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely, socio-economic status and Indigenous heritage. Testing the simulated population found that the DMFT of all cases where DMFT > 0 was 2.3 (n = 128,609) and the DMFT for Indigenous cases only was 1.9 (n = 13,749). In the simulated population the Sic25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated.
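The simulation logic described above (randomly seeding individual-level outcomes from known population-level probabilities) can be sketched in a few lines. The risk groups, shares, and per-tooth probabilities below are invented placeholders for illustration, not the socio-economic or Indigenous-heritage probabilities used in the study:

```python
import random

def simulate_dmft(n_children, groups, n_teeth=12, rng=random):
    """Seed individual outcomes from population-level probabilities: each
    simulated child is assigned a risk group according to its population
    share, then a DMFT count is drawn tooth by tooth from that group's
    per-tooth decay probability. groups: list of (share, p_per_tooth)."""
    scores = []
    for _ in range(n_children):
        u, acc = rng.random(), 0.0
        p_tooth = groups[-1][1]        # fallback if shares round below 1
        for share, p in groups:
            acc += share
            if u < acc:
                p_tooth = p
                break
        scores.append(sum(1 for _ in range(n_teeth) if rng.random() < p_tooth))
    return scores
```

Repeating such runs and comparing summary statistics (mean DMFT, top-decile DMFT) against reported population data mirrors the validation strategy the abstract describes.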
Monte Carlo simulation experiments on box-type radon dosimeter
NASA Astrophysics Data System (ADS)
Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-11-01
Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure the 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of radon alphas emitted within the volume of the box-type dosimeter that results in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for the box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (ηint) and alpha hit efficiency (ηhit). The ηint depends only on the dimensions of the dosimeter, and ηhit depends both upon the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful for understanding the intricate track registration mechanisms in the box-type dosimeter. This paper explains how the radon concentration from the
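The ray-hitting (RAHI) idea can be sketched as follows: alphas are born uniformly in the dosimeter volume with isotropic directions, and a hit is scored when the straight-line path to the CR-39 face is shorter than the alpha range in air. In this simplified version the efficiency is taken over alphas whose rays geometrically reach the detector face, which matches the abstract's observation that it reaches 100% once the box diagonal is below the range; the track-registration critical-angle condition is ignored, and dimensions and range are arbitrary.

```python
import random
import math

def hit_efficiency(lx, ly, lz, alpha_range, n=50000, rng=random):
    """RAHI-style estimate: sample alpha birth points uniformly in the box
    and isotropic directions; among rays that geometrically reach the CR-39
    sheet on the z=0 face, count those whose path length is within range."""
    hits = crossings = 0
    for _ in range(n):
        x, y, z = rng.random() * lx, rng.random() * ly, rng.random() * lz
        # isotropic direction: uniform cos(theta) in [-1, 1], uniform phi
        cz = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        st = math.sqrt(1.0 - cz * cz)
        cx, cy = st * math.cos(phi), st * math.sin(phi)
        if cz >= 0.0:
            continue                       # moving away from the detector
        t = -z / cz                        # path length to the z=0 plane
        xi, yi = x + t * cx, y + t * cy
        if 0.0 <= xi <= lx and 0.0 <= yi <= ly:
            crossings += 1
            if t <= alpha_range:           # alpha survives long enough
                hits += 1
    return hits / max(crossings, 1)
```

Because any segment between an interior point and the detector face stays inside the box, its length never exceeds the box diagonal, so a range larger than the diagonal forces the hit efficiency to 1.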
Quantum Monte Carlo Simulation of Overpressurized Liquid 4He
Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.
2005-09-30
A diffusion Monte Carlo simulation of superfluid 4He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing (~25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results for the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Over this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.
Monte Carlo simulation of retinal light absorption by infants.
Guo, Ya; Tan, Jinglu
2015-02-01
Retinal damage can occur in normal ambient lighting conditions. Infants are particularly vulnerable to retinal damage, and thousands of preterm infants sustain vision damage each year. The size of the ocular fundus affects retinal light absorption, but there is a lack of understanding of this effect for infants. In this work, retinal light absorption is simulated for different ocular fundus sizes, wavelengths, and pigment concentrations by using the Monte Carlo method. The results indicate that the neural retina light absorption per volume for infants can be two or more times that for adults. PMID:26366599
Application of Direct Simulation Monte Carlo to Satellite Contamination Studies
NASA Technical Reports Server (NTRS)
Rault, Didier F. G.; Woronwicz, Michael S.
1995-01-01
A novel method is presented to estimate contaminant levels around spacecraft and satellites of arbitrarily complex geometry. The method uses a three-dimensional direct simulation Monte Carlo algorithm to characterize the contaminant cloud surrounding the space platform, and a computer-assisted design preprocessor to define the space-platform geometry. The method is applied to the Upper Atmosphere Research Satellite to estimate the contaminant flux incident on the optics of the halogen occultation experiment (HALOE) telescope. Results are presented in terms of contaminant cloud structure, molecular velocity distribution at HALOE aperture, and code performance.
Off-Lattice Monte Carlo Simulation of Supramolecular Polymer Architectures
NASA Astrophysics Data System (ADS)
Amuasi, H. E.; Storm, C.
2010-12-01
We introduce an efficient, scalable Monte Carlo algorithm to simulate cross-linked architectures of freely jointed and discrete wormlike chains. Bond movement is based on the discrete tractrix construction, which effects conformational changes that exactly preserve fixed-length constraints of all bonds. The algorithm reproduces known end-to-end distance distributions for simple, analytically tractable systems of cross-linked stiff and freely jointed polymers flawlessly, and is used to determine the effective persistence length of short bundles of semiflexible wormlike chains, cross-linked to each other. It reveals a possible regulatory mechanism in bundled networks: the effective persistence of bundles is controlled by the linker density.
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
NASA Astrophysics Data System (ADS)
Laloy, Eric; Rogiers, Bart; Vrugt, Jasper A.; Mallants, Dirk; Jacques, Diederik
2013-05-01
This study reports on two strategies for accelerating posterior inference of a highly parameterized and CPU-demanding groundwater flow model. Our method builds on previous stochastic collocation approaches, e.g., Marzouk and Xiu (2009) and Marzouk and Najm (2009), and uses generalized polynomial chaos (gPC) theory and dimensionality reduction to emulate the output of a large-scale groundwater flow model. The resulting surrogate model is CPU efficient and serves to explore the posterior distribution at a much lower computational cost using two-stage MCMC simulation. The case study reported in this paper demonstrates a two to five times speed-up in sampling efficiency.
Risk analysis and Monte Carlo simulation applied to the generation of drilling AFE estimates
Peterson, S.K.; Murtha, J.A.; Schneider, F.F.
1995-06-01
This paper presents a method for developing an authorization-for-expenditure (AFE)-generating model and illustrates the technique with a specific offshore field development case study. The model combines Monte Carlo simulation and statistical analysis of historical drilling data to generate more accurate, risked, AFE estimates. In addition to the general method, two examples of making AFE time estimates for North Sea wells with the presented techniques are given.
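A minimal sketch of the Monte Carlo side of such an AFE model: each drilling phase gets a duration distribution loosely fitted from historical wells, the phase totals are sampled many times, and the risked AFE is read off percentiles of the result. The lognormal shapes and the task numbers below are invented placeholders, not the paper's North Sea data:

```python
import random
import math

def afe_time_estimate(tasks, n=20000, rng=random):
    """Risked AFE-style duration estimate: each phase duration is drawn from
    a lognormal parameterized by (median, log-space sigma); returns the
    P10/P50/P90 of the total well time over n Monte Carlo trials."""
    totals = []
    for _ in range(n):
        t = 0.0
        for median, sigma in tasks:
            t += median * math.exp(rng.gauss(0.0, sigma))
        totals.append(t)
    totals.sort()
    return (totals[int(0.10 * (n - 1))],
            totals[int(0.50 * (n - 1))],
            totals[int(0.90 * (n - 1))])
```

Reporting a percentile band rather than a single deterministic sum is what makes the resulting AFE "risked": the spread directly reflects the historical variability fed into each phase.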
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon
Lattice Monte Carlo simulation of Galilei variant anomalous diffusion
Guo, Gang; Bittig, Arne; Uhrmacher, Adelinde
2015-05-01
The observation of an increasing number of anomalous diffusion phenomena motivates studies to reveal the actual mechanisms behind such stochastic processes. When it is difficult to get analytical solutions or necessary to track the trajectory of particles, lattice Monte Carlo (LMC) simulation has been shown to be particularly useful. To develop such an LMC simulation algorithm for the Galilei variant anomalous diffusion, we derive explicit solutions for the conditional and unconditional first passage time (FPT) distributions with double absorbing barriers. According to the theory of random walks on lattices and the FPT distributions, we propose an LMC simulation algorithm and prove that such LMC simulation can reproduce both the mean and the mean square displacement exactly in the long-time limit. However, the error introduced in the second moment of the displacement diverges according to a power law as the simulation time progresses. We give an explicit criterion for choosing a small enough lattice step to limit the error within the specified tolerance. We further validate the LMC simulation algorithm and confirm the theoretical error analysis through numerical simulations. The numerical results agree with our theoretical predictions very well.
Bieda, Bogusław
2013-01-01
The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in MATLAB, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), liner thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of the frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data that were used include available published figures as well as data concerning the Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable to any MSW landfill compacted clay liner thickness design. PMID:23194922
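The kind of MC uncertainty propagation CB performs can be sketched without the spreadsheet: sample the uncertain liner parameters from assumed ranges, evaluate a transit-time expression for each draw, and read percentiles from the resulting distribution. The formula below is the simple advective travel time through the liner, and every range is an invented placeholder, not the MSP site data or the CONTRANS Z2 expression:

```python
import random

def transit_time_samples(n=10000, rng=random):
    """Monte Carlo propagation of liner-parameter uncertainty into the
    advective contaminant transit time t = L * n_e / (K * i) through a
    compacted clay liner. All sampling ranges are illustrative only."""
    samples = []
    for _ in range(n):
        K = 10 ** rng.uniform(-9.5, -8.5)   # hydraulic conductivity, m/s
        i = rng.uniform(0.1, 0.3)           # hydraulic gradient
        n_e = rng.uniform(0.3, 0.5)         # effective porosity
        L = rng.uniform(0.9, 1.1)           # liner thickness, m
        samples.append(L * n_e / (K * i))   # transit time, seconds
    return samples
```

Sorting the samples and reading off, say, the 5th percentile gives the same kind of conservative design figure that CB's percentiles summary reports.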
Estimation of beryllium ground state energy by Monte Carlo simulation
Kabir, K. M. Ariful; Halder, Amal
2015-05-15
Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method, we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to good results compared with the few-parameter trial wave functions presented before. Based on random numbers, we generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
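A minimal variational Monte Carlo loop can be sketched for the hydrogen atom, where the answer is known in closed form (beryllium, with its four-parameter trial function, is much more involved; this example is purely illustrative): with the trial function ψ = e^(−αr), the local energy is E_L = −α²/2 + (α−1)/r in atomic units, and Metropolis sampling of |ψ|² averages it.

```python
import numpy as np

# Illustrative VMC for hydrogen (atomic units), not the beryllium
# calculation of the abstract. alpha = 1 gives the exact E = -0.5 hartree.
def vmc_energy(alpha, n_samples=20000, step=0.5, seed=2):
    rng = np.random.default_rng(seed)
    r = np.ones(3)                         # current electron position
    energies = []
    for _ in range(n_samples):
        trial = r + rng.uniform(-step, step, 3)
        # Metropolis acceptance with ratio |psi(trial)/psi(r)|^2
        if rng.random() < np.exp(-2 * alpha * (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
        rad = np.linalg.norm(r)
        energies.append(-0.5 * alpha**2 + (alpha - 1.0) / rad)
    return np.mean(energies[2000:])        # discard burn-in

e_exact = vmc_energy(1.0)   # local energy is constant -0.5, so no variance
e_above = vmc_energy(0.8)   # variational principle: must lie above -0.5
```

The zero-variance property at the exact wave function (α = 1) is the same feature that makes well-tuned multi-parameter trial functions efficient for beryllium.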
Monte Carlo simulation of a clinical linear accelerator.
Lin, S Y; Chu, T C; Lin, J P
2001-12-01
The effects of the physical parameters of an electron beam from a Siemens PRIMUS clinical linear accelerator (linac) on the dose distribution in water were investigated by Monte Carlo simulation. The EGS4 user code OMEGA/BEAM was used in this study. Various incident electron beams, for example with different energies, spot sizes and distances from the point source, were simulated using the detailed linac head structure in the 6 MV photon mode. Approximately 10 million particles were collected in the scoring plane, which was set under the reticle to form the so-called phase space file. The phase space file served as a source for simulating the dose distribution in water using DOSXYZ. Dose profiles at Dmax (1.5 cm) and percent depth dose (PDD) curves were calculated after simulating about 1 billion histories for the dose profiles and 500 million histories for the PDD curves in a 30 x 30 x 30 cm3 water phantom. The simulation results were compared with data measured by CEA film and an ion chamber. The results show that the dose profiles are influenced by the energy and the spot size, while the PDD curves are primarily influenced by the energy of the incident beam. The effect of the distance from the point source on the dose profile is not significant, and this distance is recommended to be set at infinity. We also recommend adjusting the beam energy by using the PDD curves and then adjusting the spot size by using the dose profile, to maintain consistency between the Monte Carlo results and the measured data. PMID:11761097
Monte Carlo simulations of tungsten redeposition at the divertor target
NASA Astrophysics Data System (ADS)
Chankin, A. V.; Coster, D. P.; Dux, R.
2014-02-01
Recent modeling of controlled edge-localized modes (ELMs) in ITER with tungsten (W) divertor target plates by the SOLPS code package predicted high electron temperatures (>100 eV) and densities (>1 × 1021 m-3) at the outer target. Under certain scenarios W sputtered during ELMs can penetrate into the core in quantities large enough to cause deterioration of the discharge performance, as was shown by coupled SOLPS5.0/STRAHL/ASTRA runs. The net sputtering yield, however, was expected to be dramatically reduced by the ‘prompt redeposition’ during the first Larmor gyration of W1+ (Fussman et al 1995 Proc. 15th Int. Conf. on Plasma Physics and Controlled Nuclear Fusion Research (Vienna: IAEA) vol 2, p 143). Under high ne/Te conditions at the target during ITER ELMs, prompt redeposition would reduce W sputtering by factor p-2 ˜ 104 (with p ≡ τionωgyro ˜ 0.01). However, this relation does not include the effects of multiple ionizations of sputtered W atoms and the electric field in the magnetic pre-sheath (MPS, or ‘Chodura sheath’) and Debye sheath (DS). Monte Carlo simulations of W redeposition with the inclusion of these effects are described in the paper. It is shown that for p ≪ 1, the inclusion of multiple W ionizations and the electric field in the MPS and DS changes the physics of W redeposition from geometrical effects of circular gyro-orbits hitting the target surface, to mainly energy considerations; the key effect is the electric potential barrier for ions trying to escape into the main plasma. The overwhelming majority of ions are drawn back to the target by a strong attracting electric field. It is also shown that the possibility of a W self-sputtering avalanche by ions circulating in the MPS can be ruled out due to the smallness of the sputtered W neutral energies, which means that they do not penetrate very far into the MPS before ionizing; thus the W ions do not gain a large kinetic energy as they are accelerated back to the surface by the
Monte Carlo field-theoretic simulations of a homopolymer blend
NASA Astrophysics Data System (ADS)
Spencer, Russell; Matsen, Mark
Fluctuation corrections to the macrophase segregation transition (MST) in a symmetric homopolymer blend are examined using Monte Carlo field-theoretic simulations (MC-FTS). This technique treats interactions between unlike monomers with standard Monte Carlo techniques, while enforcing incompressibility as is done in mean-field theory. When using MC-FTS, we need to account for a UV divergence; this is done by renormalizing the Flory-Huggins interaction parameter to incorporate the divergent part of the Hamiltonian. We compare different ways of calculating this effective interaction parameter. Near the MST, the length scale of compositional fluctuations becomes large; however, the high computational requirements of MC-FTS restrict us to small system sizes. We account for these finite-size effects using the method of Binder cumulants, allowing us to locate the MST with high precision. We examine fluctuation corrections to the mean-field MST, χN = 2, as they vary with the invariant degree of polymerization, N̄ = ρ²a⁶N. These results are compared with particle-based simulations as well as analytical calculations using the renormalized one-loop theory. This research was funded by the Center for Sustainable Polymers.
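The Binder-cumulant step can be sketched in isolation (illustrative samples, not MC-FTS field configurations): for an order parameter m, U₄ = 1 − ⟨m⁴⟩/(3⟨m²⟩²) tends to 0 for Gaussian (disordered) fluctuations and to 2/3 for a doubly peaked ordered distribution, and curves of U₄ for different system sizes cross near the transition, which is how the MST is located with high precision.

```python
import numpy as np

# Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2) on two toy distributions.
rng = np.random.default_rng(3)

def binder(m):
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

m_disordered = rng.normal(0.0, 1.0, 200_000)    # Gaussian: <m^4> = 3 <m^2>^2
m_ordered = rng.choice([-1.0, 1.0], 200_000)    # |m| locked at 1

u_dis = binder(m_disordered)   # tends to 0
u_ord = binder(m_ordered)      # exactly 2/3 for this two-delta distribution
```

In a real finite-size analysis, U₄ would be computed per system size as a function of χN, and the crossing point of the curves estimates the transition.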
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
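One of the listed topics, how many runs are necessary for requirements verification, admits a compact sketch. Below is the standard zero-failure (success-run) formula with a binomial-tail generalization to k allowed failures; the TP's own appendices derive the run counts with consumer risk in more detail, so this is a generic illustration, not the TP's specific tables.

```python
import math

# If the requirement is success probability >= R and we accept only when
# at most k of n Monte Carlo runs fail, the consumer risk at the boundary
# reliability R is the binomial tail P(failures <= k). For k = 0 this
# reduces to R**n <= beta, i.e. n >= ln(beta) / ln(R).
def runs_required(R, beta, k=0):
    n = k + 1
    while True:
        tail = sum(math.comb(n, i) * (1 - R) ** i * R ** (n - i)
                   for i in range(k + 1))
        if tail <= beta:
            return n
        n += 1

n_zero = runs_required(0.997, 0.05)       # zero failures allowed
n_one = runs_required(0.997, 0.05, k=1)   # tolerating one failure costs more runs
```

For example, verifying a 99.7% success requirement at 5% consumer risk with no observed failures requires 998 runs, since 0.997^997 is still slightly above 0.05.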
Monte Carlo simulation of high-field transport equations
Abdolsalami, F.
1989-01-01
The author has studied the importance of the intracollisional field effect in the quantum transport equation derived by Khan, Davies and Wilkins (Phys. Rev. B 36, 2578 (1987)) via Monte Carlo simulations. This transport equation is identical to the integral form of the Boltzmann transport equation except that the scattering-in rates contain an auxiliary function of energy width √|α| instead of the sharp delta function of the semiclassical theory, where α = (πℏ²e/m*) E·q. Here, E is the electric field, q is the phonon wave vector, and m* is the effective mass. The transport equation studied corresponds to a single parabolic band of infinite width and is valid in the field-dominated limit, i.e. √|α| ≫ ℏ/τ_sc, where τ_sc⁻¹ is the electron scattering-out rate. In the simulation, the single parabolic band is taken to be the central valley of GaAs with transitions to higher valleys shut off. Electrons are assumed to scatter with polar optic and acoustic phonons, with the scattering parameters chosen to simulate GaAs. The loss of the intervalley scattering mechanism at high electric fields is compensated for by increasing each of the four scattering rates relative to the real values in GaAs by a factor γ. The transport equation studied contains the auxiliary function, which is not positive definite and therefore cannot represent a probability of scattering in a Monte Carlo simulation. The question of whether the intracollisional field effect is important can be resolved by replacing the non-positive-definite auxiliary function with a test positive definite function of width √|α| and comparing the results of the Monte Carlo simulation of this quantum transport equation with those of the Boltzmann transport equation. If the results are identical, the intracollisional field effect is not important.
Modeling multileaf collimators with the PEREGRINE Monte Carlo
Albright, N; Fujino, D H; J Wieczorek
1999-03-01
Multileaf collimators (MLCs) are becoming increasingly important for beam shaping and intensity modulated radiation therapy (IMRT). Their unique design can introduce subtle effects in the patient/phantom dose distribution. The PEREGRINE 3D Monte Carlo dose calculation system predicts dose by implementing a full Monte Carlo simulation of the beam delivery and patient/phantom system. As such, it provides a powerful tool to explore the dosimetric effects of MLC designs. We have installed a new MLC modeling package into PEREGRINE. This package simulates full photon and electron transport in the MLC and includes tongue-and-groove construction and curved or straight leaf ends in the leaf-shape geometry. We tested the accuracy of the PEREGRINE MLC package by comparing PEREGRINE predictions with ion chamber, diode, and photographic film measurements taken with a Varian 2100C using 6 and 18 MV photon beams. Profile and depth-dose measurements were made for the MLC configured into annulus and comb patterns. In all cases, PEREGRINE modeled these measurements to within experimental uncertainties. Our results demonstrate PEREGRINE's accuracy for modeling MLC characteristics, and suggest that PEREGRINE would be an ideal tool to explore issues such as (1) underdosing between leaves due to the "tongue-and-groove" effect when doses from multiple MLC patterns are added together, (2) radiation leakage in the bullnose region, and (3) dose under a single leaf due to scatter in the patient.
Monte Carlo simulations of kagome lattices with magnetic dipolar interactions
NASA Astrophysics Data System (ADS)
Plumer, Martin; Holden, Mark; Way, Andrew; Saika-Voivod, Ivan; Southern, Byron
Monte Carlo simulations of classical spins on the two-dimensional kagome lattice with only dipolar interactions are presented. In addition to revealing the sixfold-degenerate ground state, the nature of the finite-temperature phase transition to long-range magnetic order is discussed. Low-temperature states consisting of mixtures of degenerate ground-state configurations separated by domain walls can be explained as a result of competing exchange-like and shape-anisotropy-like terms in the dipolar coupling. Fluctuations between pairs of degenerate spin configurations are found to persist well into the ordered state as the temperature is lowered until locking in to a low-energy state. Results suggest that the system undergoes a continuous phase transition at T ~ 0 . 43 in agreement with previous MC simulations but the nature of the ordering process differs. Preliminary results which extend this analysis to the 3D fcc ABC-stacked kagome systems will be presented.
Direct Simulation Monte Carlo (DSMC) on the Connection Machine
Wong, B.C.; Long, L.N.
1992-01-01
The massively parallel Connection Machine is used to implement an improved version of the direct simulation Monte Carlo (DSMC) method for solving flows governed by the Boltzmann equation. Kinetic theory is required for analyzing hypersonic aerospace applications, and the features and capabilities of the DSMC particle-simulation technique are discussed. DSMC is shown to be inherently massively parallel and data parallel; the algorithm is based on moving molecules, cross-referencing their locations, locating collisions within cells, and sampling macroscopic quantities in each cell. The serial DSMC code is compared to the present parallel DSMC code, and timing results show that the speedup of the parallel version is approximately linear. The correct physics can be resolved from the results of the complete DSMC method implemented on the Connection Machine using the data-parallel approach.
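The core collision step of the algorithm can be sketched for a single cell. This is a toy hard-sphere relaxation with a crude acceptance rule, not the improved parallel implementation described above: candidate pairs are selected, accepted with probability proportional to relative speed, and scattered isotropically in the center-of-mass frame, which conserves momentum and energy exactly.

```python
import numpy as np

# Toy DSMC-style collision step inside one cell (movement and cell
# indexing omitted; acceptance constant is an arbitrary assumption).
rng = np.random.default_rng(4)

n = 1000
v = rng.normal(0.0, 1.0, (n, 3))
v[:, 0] += np.where(np.arange(n) < n // 2, 2.0, -2.0)  # non-equilibrium start

p0, e0 = v.sum(axis=0), 0.5 * np.sum(v**2)

for _ in range(5000):                       # candidate collision pairs
    i, j = rng.choice(n, 2, replace=False)
    g = v[i] - v[j]
    gmag = np.linalg.norm(g)
    if rng.random() < gmag / 6.0:           # accept ~ relative speed
        cos_t = 2 * rng.random() - 1        # isotropic post-collision direction
        sin_t = np.sqrt(1 - cos_t**2)
        phi = 2 * np.pi * rng.random()
        g_new = gmag * np.array([sin_t * np.cos(phi),
                                 sin_t * np.sin(phi), cos_t])
        vcm = 0.5 * (v[i] + v[j])
        v[i], v[j] = vcm + 0.5 * g_new, vcm - 0.5 * g_new

p1, e1 = v.sum(axis=0), 0.5 * np.sum(v**2)
```

Because each cell's collisions are independent, this step maps naturally onto the data-parallel decomposition the abstract describes.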
Direct simulation Monte Carlo method with a focal mechanism algorithm
NASA Astrophysics Data System (ADS)
Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung
2015-01-01
To simulate the observation of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 shows more or similarly reliable results compared to those of DSMC-1, for events with 12 or more recorded stations, by weighting twice for hypocentral distance of less than 80 km. Not only the number of stations, but also other factors such as rough topography, magnitude of event, and the analysis method influence the reliability of DSMC-2. The most reliable result by DSMC-2 is obtained by the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than those of DSMC-1 to capture a sufficient number of arrived particles in the small-sized receiver.
Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model
Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.
2011-10-01
We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover >90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.
NASA Astrophysics Data System (ADS)
Kurinsky, Noah; Sajina, Anna
2014-06-01
We present a novel simulation and fitting program which employs MCMC to constrain the spectral energy distribution makeup and luminosity function evolution required to produce a given multi-wavelength survey. The tool employs a multidimensional color-color diagnostic to determine goodness of fit, and simulates observational sources of error such as flux limits and instrumental noise. Our goals in designing this tool were to (a) use it to study infrared surveys and test SED template models, and (b) create it in such a way as to make it usable in any electromagnetic regime for any class of sources to which any luminosity functional form can be prescribed. I will discuss our specific use of the program to characterize a survey from the Herschel SPIRE HerMES catalog, including implications for our luminosity function and SED models. I will also briefly discuss the ways we envision using it for simulation and application to other surveys, and I will demonstrate the degree to which its reusability can serve to enrich a wide range of analyses.
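The MCMC machinery itself can be illustrated with a deliberately tiny stand-in (a straight-line model with Gaussian noise; the program described above fits luminosity functions and SED mixtures against color-color diagnostics, which is far richer): propose a parameter perturbation, compare likelihoods, and accept with the Metropolis rule.

```python
import numpy as np

# Minimal Metropolis MCMC fit of a two-parameter model to synthetic data.
# The model, noise level, and proposal scales are illustrative assumptions.
rng = np.random.default_rng(5)

x = np.linspace(0, 10, 50)
a_true, b_true, sigma = 2.0, 1.0, 0.5
y = a_true * x + b_true + rng.normal(0, sigma, x.size)

def log_like(a, b):
    return -0.5 * np.sum((y - (a * x + b)) ** 2) / sigma**2

chain = []
a, b = 0.0, 0.0
ll = log_like(a, b)
for _ in range(20000):
    a_p, b_p = a + rng.normal(0, 0.05), b + rng.normal(0, 0.2)
    ll_p = log_like(a_p, b_p)
    if np.log(rng.random()) < ll_p - ll:   # Metropolis acceptance
        a, b, ll = a_p, b_p, ll_p
    chain.append((a, b))

a_hat, b_hat = np.mean(chain[5000:], axis=0)  # posterior means after burn-in
```

In the survey-fitting context, `log_like` would be replaced by the color-color goodness-of-fit statistic and the parameters by luminosity-function and SED quantities.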
Biofilm growth: a lattice Monte Carlo model
NASA Astrophysics Data System (ADS)
Tao, Yuguo; Slater, Gary
2011-03-01
Biofilms are complex colonies of bacteria that grow in contact with a wall, often in the presence of a flow. In the current work, biofilm growth is investigated using a new two-dimensional lattice Monte Carlo algorithm based on the Bond-Fluctuation Algorithm (BFA). One of the distinguishing characteristics of biofilms, the synthesis and physical properties of the extracellular polymeric substance (EPS) in which the cells are embedded, is explicitly taken into account. Cells are modelled as autonomous closed loops with well-defined mechanical and thermodynamic properties, while the EPS is modelled as flexible polymeric chains. This BFA model allows us to add biologically relevant features such as: the uptake of nutrients; cell growth, division and death; the production of EPS; cell maintenance and hibernation; the generation of waste and the impact of toxic molecules; cell mutation and evolution; cell motility. By tuning the structural, interactional and morphologic parameters of the model, the cell shapes as well as the growth and maturation of various types of biofilm colonies can be controlled.
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Direct simulation Monte Carlo and Navier-Stokes simulations of blunt body wake flows
NASA Astrophysics Data System (ADS)
Moss, James N.; Mitcheltree, Robert A.; Dogra, Virendra K.; Wilmoth, Richard G.
1994-07-01
Numerical results obtained with direct simulation Monte Carlo and Navier-Stokes methods are presented for a Mach-20 nitrogen flow about a 70-deg blunted cone. The flow conditions simulated are those that can be obtained in existing low-density hypersonic wind tunnels. Three sets of flow conditions are considered, with freestream Knudsen numbers ranging from 0.03 to 0.001. The focus is on the wake structure: how the wake structure changes as a function of rarefaction, what the afterbody levels of heating are, and to what limits the continuum models remain realistic as rarefaction in the wake is progressively increased. Calculations are made with and without an afterbody sting. Results for the afterbody sting are emphasized in anticipation of an experimental study for the current flow conditions and model configuration. The Navier-Stokes calculations were made with and without slip boundary conditions. Comparisons of the results obtained with the two simulation methodologies are made for both flowfield structure and surface quantities.
Raga: Monte Carlo simulations of gravitational dynamics of non-spherical stellar systems
NASA Astrophysics Data System (ADS)
Vasiliev, Eugene
2014-11-01
Raga (Relaxation in Any Geometry) is a Monte Carlo simulation method for gravitational dynamics of non-spherical stellar systems. It is based on the SMILE software (ascl:1308.001) for orbit analysis. It can simulate stellar systems with a much smaller number of particles N than the number of stars in the actual system, represent an arbitrary non-spherical potential with a basis-set or spline spherical-harmonic expansion with the coefficients of expansion computed from particle trajectories, and compute particle trajectories independently and in parallel using a high-accuracy adaptive-timestep integrator. Raga can also model two-body relaxation by local (position-dependent) velocity diffusion coefficients (as in Spitzer's Monte Carlo formulation) and adjust the magnitude of relaxation to the actual number of stars in the target system, and model the effect of a central massive black hole.
NASA Astrophysics Data System (ADS)
Hilburn, Guy Louis
Results from several studies are presented which detail explorations of the physical and spectral properties of low luminosity active galactic nuclei. An initial Sagittarius A* general relativistic magnetohydrodynamic simulation and Monte Carlo radiation transport model suggests accretion rate changes as the dominant flaring method. A similar study on M87 introduces new methods to the Monte Carlo model for increased consistency in highly energetic sources. Again, accretion rate variation seems most appropriate to explain spectral transients. To more closely resolve the methods of particle energization in active galactic nuclei accretion disks, a series of localized shearing box simulations explores the effect of numerical resolution on the development of current sheets. A particular focus on numerically describing converged current sheet formation will provide new methods for consideration of turbulence in accretion disks.
Deposition at glancing angle, surface roughness, and protein adsorption: Monte Carlo simulations.
Zhdanov, Vladimir P; Rechendorff, Kristian; Hovgaard, Mads B; Besenbacher, Flemming
2008-06-19
To generate rough surfaces in Monte Carlo simulations, we use the 2 + 1 solid-on-solid model of deposition with rapid transient diffusion of newly arrived atoms supplied at glancing angle. The surfaces generated are employed to scrutinize the effect of surface roughness on adsorption of globular and anisotropic rodlike proteins. The obtained results are compared with the available experimental data for Ta deposition at glancing angle and for the bovine serum albumin and fibrinogen uptake on the corresponding Ta films.
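A 1+1-dimensional toy of the solid-on-solid idea (the study uses a 2+1 model with glancing-angle flux and rapid transient diffusion of newly arrived atoms; this sketch keeps only deposition with nearest-neighbor relaxation) shows how the interface width grows as material accumulates:

```python
import numpy as np

# 1+1 solid-on-solid deposition with surface relaxation (toy version of
# the roughening mechanism; the glancing-angle flux is omitted).
rng = np.random.default_rng(6)

L = 256
h = np.zeros(L, dtype=int)   # column heights, periodic boundaries

def width(h):
    return np.sqrt(np.mean((h - h.mean()) ** 2))

w_early = None
for step in range(50 * L):
    i = rng.integers(L)
    # relaxation: the particle settles on the lowest of columns i-1, i, i+1
    nbrs = [(i - 1) % L, i, (i + 1) % L]
    j = min(nbrs, key=lambda k: h[k])
    h[j] += 1
    if step == 5 * L:
        w_early = width(h)   # roughness after ~5 monolayers

w_late = width(h)            # roughness after ~50 monolayers
```

In the adsorption context above, the height field generated this way is what would then be handed to the protein-adsorption part of the simulation as the rough substrate.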
NASA Astrophysics Data System (ADS)
De Napoli, M.; Romano, F.; D'Urso, D.; Licciardello, T.; Agodi, C.; Candiano, G.; Cappuzzello, F.; Cirrone, G. A. P.; Cuttone, G.; Musumarra, A.; Pandola, L.; Scuderi, V.
2014-12-01
When a carbon beam interacts with human tissues, many secondary fragments are produced into the tumor region and the surrounding healthy tissues. Therefore, in hadrontherapy precise dose calculations require Monte Carlo tools equipped with complex nuclear reaction models. To get realistic predictions, however, simulation codes must be validated against experimental results; the wider the dataset is, the more the models are finely tuned. Since no fragmentation data for tissue-equivalent materials at Fermi energies are available in literature, we measured secondary fragments produced by the interaction of a 55.6 MeV u-1 12C beam with thick muscle and cortical bone targets. Three reaction models used by the Geant4 Monte Carlo code, the Binary Light Ions Cascade, the Quantum Molecular Dynamic and the Liege Intranuclear Cascade, have been benchmarked against the collected data. In this work we present the experimental results and we discuss the predictive power of the above mentioned models.
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time, relative to sequential and censored sequential testing. A total of 30,960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
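The virtual-testing procedure can be sketched as follows (the Weibull parameters are assumptions for illustration, not the study's bearing data): draw lives from a Weibull distribution, group them, and keep each group's first failure. The group minima are themselves Weibull-distributed with the same slope but a smaller characteristic life, which is why sudden death shortens the test time per observed failure.

```python
import numpy as np

# Sudden-death sampling of virtual bearing lives.
# Weibull slope and characteristic life below are illustrative assumptions.
rng = np.random.default_rng(7)

beta, eta = 1.5, 100.0                 # Weibull slope, characteristic life
n_groups, group_size = 36, 4           # 144 virtual bearings in total

lives = eta * rng.weibull(beta, (n_groups, group_size))
first_failures = lives.min(axis=1)     # one failure terminates each group

# Analytic L10 of the underlying population
l10_true = eta * (np.log(1 / 0.9)) ** (1 / beta)

# Minimum of a Weibull group is Weibull with the same slope and
# characteristic life eta / group_size**(1/beta)
eta_min_theory = eta / group_size ** (1 / beta)
```

Repeating this draw many times and fitting Weibull parameters to the `first_failures` sample is the kind of experiment the abstract runs to compare scatter in life and slope across test configurations.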
Petroccia, H; Bolch, W; Li, Z; Mendenhall, N
2015-06-15
Purpose: Mean organ doses to structures located within and outside of field boundaries during radiotherapy treatment must be considered when looking at secondary effects. Treatment planning for patients with 40 years of follow-up did not include 3D treatment planning images and did not estimate dose to structures outside of the direct field. Therefore, it is of interest to correlate actual clinical events with the doses received. Methods: Accurate models of radiotherapy machines combined with whole-body computational phantoms using Monte Carlo methods allow for dose reconstructions intended for studies on late radiation effects. The Theratron-780 radiotherapy unit and anatomically realistic hybrid computational phantoms are modeled in the Monte Carlo radiation transport code MCNPX. The major components of the machine, including the source capsule, lead in the unit head, collimators (fixed/adjustable), and trimmer bars, are simulated. The MCNPX transport code is used to compare calculated values in a water phantom with published data from BJR Suppl. 25 for in-field doses and experimental data from AAPM Task Group No. 36 for out-of-field doses. Next, the validated cobalt-60 teletherapy model is combined with the UF/NCI Family of Reference Hybrid Computational Phantoms as a methodology for estimating organ doses. Results: The model of the Theratron-780 is shown to agree with percentage depth-dose data within approximately 1%, and for out-of-field doses the machine is shown to agree within 8.8%. Organ doses are reported for the reference hybrid phantoms. Conclusion: Combining the UF/NCI Family of Reference Hybrid Computational Phantoms with a validated model of the Theratron-780 allows for organ dose estimates for both in-field and out-of-field organs. By changing field size and position and adding patient-specific blocking, more complicated treatment setups can be recreated for patients treated historically, particularly those who lack both 2D/3D image sets.
Direct simulation Monte Carlo simulations of aerodynamic effects on sounding rockets
NASA Astrophysics Data System (ADS)
Allen, Jeffrey B.
Over the past several decades, atomic oxygen (AO) measurements taken from sounding rocket sensor payloads in the Mesosphere and lower Thermosphere (MALT) have shown marked variability. AO data retrieved from the second Coupling of Dynamics and Aurora (CODA II) experiment has shown that the data is highly dependent upon rocket orientation. Many sounding rocket payloads, including CODA II, contain AO sensors that are located in close proximity to the payload surface and are thus significantly influenced by compressible, aerodynamic effects. In addition, other external effects such as Doppler shift and the contamination of sensor optics from desorption may play a significant role. These effects serve to inhibit the AO sensors' ability to accurately determine undisturbed atmospheric conditions. The present research numerically models the influence caused by these effects (primarily aerodynamic), using the direct simulation Monte Carlo (DSMC) method. In particular, a new parallel, steady/unsteady, three-dimensional, DSMC solver, foamDSMC, is developed. The method of development and validation of this new solver is presented with comparisons made with available commercial solvers. The foamDSMC solver is then used to simulate the steady and unsteady flow-field of CODA II, with steady-state simulations conducted along 2 km intervals and unsteady simulations conducted near apogee. The results based on the compressible flow aerodynamics as well as Doppler shift and contamination effects are all examined, and are used to create correction functions based on the ratio of undisturbed to disturbed flowfield concentrations. The numerical simulations verify the experimental results showing the strong influence of rocket orientation on concentration, and show conclusive evidence pointing to the success of the correction functions to significantly minimize the external effects previously mentioned. In addition to the correction function approach, the optimal placement of the AO
Optimization of Monte Carlo trial moves for protein simulations.
Betancourt, Marcos R
2011-01-01
Closed rigid-body rotations of residue segments under bond-angle restraints are simple and effective Monte Carlo moves for searching the conformational space of proteins. The efficiency of these moves is examined here as a function of the number of moving residues and the magnitude of their displacement. It is found that the efficiency of folding and equilibrium simulations can be significantly improved by tailoring the distribution of the number of moving residues to the simulation temperature. In general, simulations exploring compact conformations are more efficient when the average number of moving residues is smaller. It is also demonstrated that the moves do not require additional restrictions on the magnitude of the rotation displacements and perform much better than other rotation moves that do not restrict the bond angles a priori. As an example, these results are applied to the replica exchange method. By assigning distributions that generate a smaller number of moving residues to lower temperature replicas, the simulation times are decreased as long as the higher temperature replicas are effective.
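The tailoring described above — biasing the number of moving residues toward smaller segments at lower temperature — can be sketched with a schematic Metropolis sampler. This is not the paper's actual move set: the energy function, the stand-in "rotation" (a rigid shift of a slice), and the geometric segment-length distribution are all invented for illustration.

```python
import math
import random

random.seed(0)

def sample_segment_length(temperature, max_len=10):
    """Draw the number of moving residues from a truncated geometric
    distribution whose mean shrinks at lower temperature, so compact
    (cold) states are explored with smaller, more local moves."""
    p = min(0.9, 0.2 + 0.5 * temperature)  # hypothetical temperature mapping
    k = 1
    while random.random() > p and k < max_len:
        k += 1
    return k

def metropolis_step(state, energy_fn, temperature):
    """One Metropolis step: perturb a randomly chosen contiguous segment
    and accept with probability min(1, exp(-dE/T))."""
    n = len(state)
    k = sample_segment_length(temperature)
    i = random.randrange(0, n - k + 1)
    trial = list(state)
    delta = random.uniform(-0.5, 0.5)
    for j in range(i, i + k):
        trial[j] += delta  # stand-in for a closed rigid-body rotation
    dE = energy_fn(trial) - energy_fn(state)
    if dE <= 0 or random.random() < math.exp(-dE / temperature):
        return trial
    return state

# Toy energy: a harmonic well keeps the "chain" near zero.
energy = lambda s: sum(x * x for x in s)
state = [1.0] * 20
for _ in range(2000):
    state = metropolis_step(state, energy, temperature=0.3)
```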
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy
2015-09-01
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo commissioning of the linac head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, then analyzing the differences between simulation and measurement using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, with a source FWHM (full width at half maximum) of 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The MC calculations in a water phantom using DOSXYZnrc yielded percent depth doses (PDDs) and beam profiles at a depth of 10 cm, which were compared with measurements. The commissioning is considered complete when the difference between measured and calculated relative depth-dose data along the central axis and the dose profile at a depth of 10 cm is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations. In this commissioning process, the good criteria of dose
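The ≤5% commissioning criterion quoted above amounts to a pointwise comparison between measured and calculated relative dose curves. A minimal sketch with made-up dose values:

```python
def passes_commissioning(measured, calculated, tolerance_pct=5.0):
    """Return True if the relative dose difference at every depth point
    is within the tolerance (both curves normalized to their maximum)."""
    m_max, c_max = max(measured), max(calculated)
    for m, c in zip(measured, calculated):
        diff_pct = abs(m / m_max - c / c_max) * 100.0
        if diff_pct > tolerance_pct:
            return False
    return True

# Hypothetical central-axis PDD samples (arbitrary units).
measured_pdd   = [55.0, 100.0, 86.0, 67.0, 52.0]
calculated_pdd = [57.0,  99.0, 84.0, 66.0, 51.0]
print(passes_commissioning(measured_pdd, calculated_pdd))  # → True
```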
Monte Carlo simulator of realistic x-ray beam for diagnostic applications
Bontempi, Marco; Andreani, Lucia; Rossi, Pier Luca; Visani, Andrea
2010-08-15
Purpose: Monte Carlo simulation is a very useful tool for radiotherapy and diagnostic radiology. Yet even with the latest PCs, simulation of the photon spectra emitted by an x-ray tube is a time-consuming task, potentially limiting the ability to obtain relevant data such as dose evaluations, simulations of geometric settings, or detector efficiency monitoring. This study developed and validated a method to generate random numbers for realistic beams, in terms of photon spectrum and intensity, to simulate x-ray tubes via Monte Carlo algorithms. Methods: Starting from literature data, the most common semiempirical models of bremsstrahlung are analyzed and implemented, adjusting their formulation to describe a large irradiation area (i.e., a large field of view) and to take account of the heel effect, as in common practice during patient examinations. Results: Simulation results show that Birch and Marshall's model is the fastest and most accurate for the aims of this work. Correction of the geometric size of the beam and validation of the intensity variation (heel effect) yielded excellent results, with differences between experimental and simulated data of less than 6%. Conclusions: The results of the validation and execution-time tests showed that the tube simulator calculates the x-ray photons quickly and efficiently and is fully capable of accounting for the phenomena occurring in a real beam (total filtration, focal spot size, and heel effect), so it can be used in a wide range of applications in industry, medical physics, and quality assurance.
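The core of such a tube simulator is fast sampling of photon energies from a tabulated spectrum; inverse-CDF sampling is the usual approach, and the heel effect can be applied as an intensity weight across the field. A minimal sketch (the spectrum bins, fluence values, and heel-effect falloff are invented, not Birch–Marshall output):

```python
import bisect
import random

random.seed(42)

# Hypothetical 6-bin photon spectrum (energy in keV, relative fluence).
energies = [20, 30, 40, 50, 60, 70]
fluence  = [0.05, 0.20, 0.30, 0.25, 0.15, 0.05]

# Build the cumulative distribution once; each sample is then O(log n).
cdf, total = [], 0.0
for f in fluence:
    total += f
    cdf.append(total)
cdf = [c / total for c in cdf]

def sample_energy():
    """Inverse-CDF sampling: pick the first bin whose CDF exceeds u."""
    u = random.random()
    return energies[bisect.bisect_left(cdf, u)]

def heel_weight(anode_side_frac):
    """Crude heel-effect stand-in: intensity falls off linearly toward
    the anode side of the field (0 = cathode edge, 1 = anode edge)."""
    return 1.0 - 0.3 * anode_side_frac

samples = [sample_energy() for _ in range(100_000)]
mean_keV = sum(samples) / len(samples)
```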
Monte Carlo simulation of turnover processes in the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
A Monte Carlo model for the gardening of the lunar surface by meteoritic impact is described, and some representative results are given. The model accounts with reasonable success for a wide variety of properties of the regolith. The smoothness of the lunar surface on a scale of centimeters to meters, which was not reproduced in an earlier version of the model, is accounted for by the preferential downward movement of low-energy secondary particles. The time scale for filling lunar grooves and craters by this process is also derived. The experimental bombardment ages (about 4 × 10^8 yr for spallogenic rare gases, about 10^9 yr for neutron-capture Gd and Sm isotopes) are not reproduced by the model. The explanation is not obvious.
Million-Body Star Cluster Simulations: Comparisons between Monte Carlo and Direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-08-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes "frozen" (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms
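The mean glandular dose in such a study is essentially a masked average over the voxelized dose map: only voxels segmented as glandular tissue contribute. A minimal sketch with a made-up 2×2×2 voxel phantom (real segmented models would have millions of voxels and mass-weighted averaging):

```python
import numpy as np

# Hypothetical segmented breast model: per-voxel dose (mGy per unit open
# air exposure at isocenter) and a boolean mask marking glandular voxels.
dose = np.array([[[1.2, 0.8], [0.9, 1.1]],
                 [[1.0, 0.7], [1.3, 0.6]]])
glandular = np.array([[[True, False], [False, True]],
                      [[True, False], [True, False]]])

mean_glandular_dose = dose[glandular].mean()  # average over glandular voxels
average_dose = dose.mean()                    # average over the whole breast
glandularity = glandular.mean()  # volume fraction; mass-based only if
                                 # all voxels share the same density
```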
Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code
Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza
2013-01-01
The Monte Carlo method is the most accurate method for simulating radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator was performed for its 6 MV and 18 MV beams. The results of the simulation were validated against measurements in water with an ionization chamber and with extended dose range (EDR2) film in solid water. The linac's X-ray characteristics are highly sensitive to the properties of the primary electron beam. A square field of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. The head simulation was performed with BEAMnrc and the dose calculation with DOSXYZnrc; the 3ddose file produced by DOSXYZnrc was analyzed using a homemade MATLAB program. At 6 MV, the dose calculated by Monte Carlo modeling agreed with direct measurement to within 1%, even in the build-up region. At 18 MV, the agreement was within 1%, except in the build-up region, where the difference was 2% (compared with 1% at 6 MV). The mean difference between measurements and Monte Carlo simulation is very small for both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique. PMID:24672765
Monte Carlo modeling of spallation targets containing uranium and americium
NASA Astrophysics Data System (ADS)
Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter
2014-09-01
Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code, MCADS (Monte Carlo model for Accelerator Driven Systems). Good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on 241Am and 243Am nuclei allows this model to be used for simulations with extended Am targets. It was demonstrated that the MCADS model can be used to calculate the critical mass of 233,235U, 237Np, 239Pu, and 241Am. Several geometry options and material compositions (U, U + Am, Am, Am2O3) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deeply subcritical targets with a neutron multiplication factor of k ∼ 0.5. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.
HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting
NASA Astrophysics Data System (ADS)
Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni
2016-06-01
HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
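The implemented models — a Gaussian 21-cm signal on top of a polynomial foreground — translate into a compact Gaussian log-likelihood of the kind a sampler such as PyMultiNest would call repeatedly. A minimal sketch (the frequency grid, noise level, and parameter values are illustrative, not HIBAYES defaults):

```python
import numpy as np

freqs = np.linspace(50.0, 100.0, 64)   # MHz
sigma_noise = 5.0                       # mK, assumed uniform per channel

def model_temp(theta, nu):
    """Gaussian HI signal (amplitude, center, width) plus a 2nd-order
    polynomial foreground in normalized frequency."""
    amp, nu0, w, c0, c1, c2 = theta
    x = nu / 70.0
    signal = amp * np.exp(-0.5 * ((nu - nu0) / w) ** 2)
    foreground = c0 + c1 * x + c2 * x ** 2
    return signal + foreground

def log_likelihood(theta, nu, data):
    """Gaussian log-likelihood the sampler would evaluate per live point."""
    resid = data - model_temp(theta, nu)
    return -0.5 * np.sum((resid / sigma_noise) ** 2
                         + np.log(2 * np.pi * sigma_noise ** 2))

# Simulate data at known parameters; the likelihood should prefer the
# true parameters over a signal-free model.
rng = np.random.default_rng(1)
truth = (-150.0, 78.0, 5.0, 2000.0, -300.0, 40.0)
data = model_temp(truth, freqs) + rng.normal(0.0, sigma_noise, freqs.size)
```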
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
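The random sequential addition step described above — propose a sphere center, reject it if it overlaps any already-placed sphere — can be sketched directly; the paper's fast nearest-neighbor search only accelerates the same overlap test. A brute-force sketch with hypothetical dimensions (real TRISO packings are far denser and need the cell-list acceleration):

```python
import random

random.seed(7)

def rsa_pack(n_target, radius, box, max_tries=200_000):
    """Random Sequential Addition of equal spheres in a cubic box:
    accept a proposed center only if it lies at least 2r from every
    sphere placed so far (brute-force overlap check)."""
    centers = []
    min_d2 = (2 * radius) ** 2
    for _ in range(max_tries):
        if len(centers) == n_target:
            break
        p = tuple(random.uniform(radius, box - radius) for _ in range(3))
        ok = all(sum((a - b) ** 2 for a, b in zip(p, q)) >= min_d2
                 for q in centers)
        if ok:
            centers.append(p)
    return centers

spheres = rsa_pack(n_target=200, radius=0.025, box=1.0)
```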
Effect of doping of graphene structure: A Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Masrour, R.; Jabar, A.
2016-10-01
In this work, we have studied the effect of magnetic-atom doping of the graphene structure using Monte Carlo simulation. The reduced critical temperature as a function of the magnetic-atom doping x has been deduced from the thermal variation of the magnetization and the magnetic susceptibility. The variation of the magnetization with the crystal field of the graphene structure for different x and for different reduced temperatures has been established. We also computed the coercive field (hC) as a function of x in the graphene structure, finding that hC increases with increasing doping concentration x, as predicted experimentally. These results demonstrate doping-induced magnetism in graphene. Magnetically doped graphene systems are potential candidates for application in future spintronic devices, although magnetometry requires macroscopic quantities of graphene to detect the magnetic moments directly.
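The kind of simulation described — Metropolis sampling of magnetic atoms randomly placed on a lattice with concentration x — can be illustrated with a site-diluted Ising model. This is a deliberate simplification: the paper's spin Hamiltonian and honeycomb-lattice details are not reproduced here, and a square lattice is used only to show the dilution bookkeeping.

```python
import math
import random

random.seed(3)

def diluted_ising_magnetization(L, x, T, J=1.0, sweeps=400):
    """Metropolis simulation of a site-diluted Ising model on an L×L
    square lattice: a fraction x of sites carries a spin (initialized up),
    the rest are vacant. Returns |M| per magnetic site."""
    occ = [[random.random() < x for _ in range(L)] for _ in range(L)]
    spin = [[1 if occ[i][j] else 0 for j in range(L)] for i in range(L)]
    n_mag = sum(sum(row) for row in occ) or 1
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            if not occ[i][j]:
                continue  # vacant site: no spin to flip
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2.0 * J * spin[i][j] * nb
            if dE <= 0 or random.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
    return abs(sum(sum(row) for row in spin)) / n_mag

# Well below the ordering temperature, a weakly diluted lattice stays ordered.
m_dense = diluted_ising_magnetization(L=16, x=0.95, T=1.0)
```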
Vector Monte Carlo simulations on atmospheric scattering of polarization qubits.
Li, Ming; Lu, Pengfei; Yu, Zhongyuan; Yan, Lei; Chen, Zhihui; Yang, Chuanghua; Luo, Xiao
2013-03-01
In this paper, a vector Monte Carlo (MC) method is proposed to study the influence of atmospheric scattering on polarization qubits for satellite-based quantum communication. The vector MC method utilizes a transmittance method to solve the photon free path for an inhomogeneous atmosphere and random number sampling to determine whether the type of scattering is aerosol scattering or molecule scattering. Simulations are performed for downlink and uplink. The degrees and the rotations of polarization are qualitatively and quantitatively obtained, which agree well with the measured results in the previous experiments. The results show that polarization qubits are well preserved in the downlink and uplink, while the number of received single photons is less than half of the total transmitted single photons for both links. Moreover, our vector MC method can be applied for the scattering of polarized light in other inhomogeneous random media.
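Two ingredients named above — solving the photon free path from the transmittance in an inhomogeneous atmosphere, and choosing aerosol versus molecular scattering by a random number — can be sketched as follows. The exponential extinction profiles and scale heights are a stand-in, not a real atmosphere model:

```python
import math
import random

random.seed(11)

# Hypothetical extinction coefficients (per km) at ground level, each
# decaying exponentially with its own scale height (km).
BETA_MOL0, H_MOL = 0.012, 8.0   # molecular (Rayleigh)
BETA_AER0, H_AER = 0.025, 1.2   # aerosol (Mie)

def beta_total(z):
    return (BETA_MOL0 * math.exp(-z / H_MOL)
            + BETA_AER0 * math.exp(-z / H_AER))

def free_path(z0, dz=0.01):
    """Transmittance method: draw u, then accumulate optical depth upward
    until exp(-tau) falls below u; that distance is the free path.
    The z < 100 km cap models escape from the top of the atmosphere."""
    u = random.random()
    tau, z = 0.0, z0
    while math.exp(-tau) > u and z < 100.0:
        tau += beta_total(z) * dz
        z += dz
    return z - z0

def scattering_type(z):
    """Pick the scattering type in proportion to the local coefficients."""
    b_mol = BETA_MOL0 * math.exp(-z / H_MOL)
    b_aer = BETA_AER0 * math.exp(-z / H_AER)
    return "aerosol" if random.random() < b_aer / (b_mol + b_aer) else "molecule"

paths = [free_path(0.0) for _ in range(200)]
```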
Kinetic Monte Carlo Simulation of Oxygen Diffusion in Ytterbium Disilicate
NASA Astrophysics Data System (ADS)
Good, Brian
2015-03-01
Ytterbium disilicate is of interest as a potential environmental barrier coating for aerospace applications, notably for use in next generation jet turbine engines. In such applications, the diffusion of oxygen and water vapor through these coatings is undesirable if high temperature corrosion is to be avoided. In an effort to understand the diffusion process in these materials, we have performed kinetic Monte Carlo simulations of vacancy-mediated oxygen diffusion in Ytterbium Disilicate. Oxygen vacancy site energies and diffusion barrier energies are computed using Density Functional Theory. We find that many potential diffusion paths involve large barrier energies, but some paths have barrier energies smaller than one electron volt. However, computed vacancy formation energies suggest that the intrinsic vacancy concentration is small in the pure material, with the result that the material is unlikely to exhibit significant oxygen permeability.
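The kinetic Monte Carlo scheme used here — choosing among vacancy hops with Arrhenius rates built from DFT barrier energies — follows the standard residence-time (BKL) algorithm, sketched below. The barrier values and attempt frequency are invented; only the selection and clock-advance logic is the point:

```python
import math
import random

random.seed(5)

KB = 8.617333e-5  # Boltzmann constant, eV/K

def arrhenius(barrier_eV, T, nu0=1.0e13):
    """Hop rate from an attempt frequency and a diffusion barrier."""
    return nu0 * math.exp(-barrier_eV / (KB * T))

def kmc_step(barriers, T):
    """Residence-time algorithm: pick one event with probability
    proportional to its rate, and advance time by -ln(u)/R_total."""
    rates = [arrhenius(b, T) for b in barriers]
    r_total = sum(rates)
    target = random.random() * r_total
    acc = 0.0
    for idx, r in enumerate(rates):
        acc += r
        if target < acc:
            break
    dt = -math.log(random.random()) / r_total
    return idx, dt

# Hypothetical barrier set: one easy path (<1 eV) among several hard ones,
# mirroring the paper's finding that a few low-barrier paths dominate.
barriers = [0.8, 1.6, 1.9, 2.1]
choices = [kmc_step(barriers, T=1500.0)[0] for _ in range(1000)]
```

At 1500 K the 0.8 eV path outpaces the others by factors of hundreds, so it is selected almost every step; this is exactly why a single sub-eV barrier can control the permeability.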
Monte Carlo simulations of ABC stacked kagome lattice films.
Yerzhakov, H V; Plumer, M L; Whitehead, J P
2016-05-18
Properties of films of geometrically frustrated ABC stacked antiferromagnetic kagome layers are examined using Metropolis Monte Carlo simulations. The impact of having an easy-axis anisotropy on the surface layers and cubic anisotropy in the interior layers is explored. The spin structure at the surface is shown to be different from that of the bulk 3D fcc system, where surface axial anisotropy tends to align spins along the surface [1 1 1] normal axis. This alignment then propagates only weakly to the interior layers through exchange coupling. Results are shown for the specific heat, magnetization and sub-lattice order parameters for both surface and interior spins in three and six layer films as a function of increasing axial surface anisotropy. Relevance to the exchange bias phenomenon in IrMn3 films is discussed.
Weijs, Liesbeth; Yang, Raymond S H; Das, Krishna; Covaci, Adrian; Blust, Ronny
2013-05-01
Physiologically based pharmacokinetic (PBPK) modeling in marine mammals is a challenge because of the lack of parameter information and the ban on exposure experiments. To minimize uncertainty and variability, parameter estimation methods are required for the development of reliable PBPK models. The present study is the first to develop PBPK models for the lifetime bioaccumulation of p,p'-DDT, p,p'-DDE, and p,p'-DDD in harbor porpoises. In addition, this study is also the first to apply the Bayesian approach executed with Markov chain Monte Carlo simulations using two data sets of harbor porpoises from the Black and North Seas. Parameters from the literature were used as priors for the first "model update" using the Black Sea data set, the resulting posterior parameters were then used as priors for the second "model update" using the North Sea data set. As such, PBPK models with parameters specific for harbor porpoises could be strengthened with more robust probability distributions. As the science and biomonitoring effort progress in this area, more data sets will become available to further strengthen and update the parameters in the PBPK models for harbor porpoises as a species anywhere in the world. Further, such an approach could very well be extended to other protected marine mammals.
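The sequential "model update" strategy — the posterior from the Black Sea data set becoming the prior for the North Sea data set — can be illustrated with a one-parameter Metropolis sampler. This is a conceptual sketch, not the study's PBPK model: the data values, noise level, and Gaussian approximation of the stage-1 posterior are all invented.

```python
import math
import random

random.seed(9)

def metropolis(log_prior, data, sigma, n=20_000, step=0.3, x0=0.0):
    """Sample a single parameter whose likelihood is Gaussian around
    each observation; returns the post-burn-in chain."""
    def log_post(x):
        ll = sum(-0.5 * ((d - x) / sigma) ** 2 for d in data)
        return log_prior(x) + ll
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp - lp > math.log(random.random()):
            x, lp = xp, lpp
        samples.append(x)
    return samples[n // 2:]  # discard burn-in

# Stage 1: vague literature prior, "Black Sea" data set.
flat_prior = lambda x: 0.0
black_sea = [1.2, 0.8, 1.1, 0.9, 1.0]
post1 = metropolis(flat_prior, black_sea, sigma=0.5)

# Stage 2: the stage-1 posterior, approximated as a Gaussian, becomes
# the prior for the "North Sea" data set.
m1 = sum(post1) / len(post1)
s1 = (sum((x - m1) ** 2 for x in post1) / len(post1)) ** 0.5
gauss_prior = lambda x: -0.5 * ((x - m1) / s1) ** 2
north_sea = [1.4, 1.3, 1.5]
post2 = metropolis(gauss_prior, north_sea, sigma=0.5)
```

The stage-2 posterior mean lands between the stage-1 estimate and the new data mean, weighted by their precisions, which is the strengthening of parameter distributions the abstract describes.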
Monte Carlo and analytic simulations in nanoparticle-enhanced radiation therapy
Paro, Autumn D; Hossain, Mainul; Webster, Thomas J; Su, Ming
2016-01-01
Analytical and Monte Carlo simulations have been used to predict dose enhancement factors in nanoparticle-enhanced X-ray radiation therapy. Both simulations predict an increase in dose enhancement in the presence of nanoparticles, but the two methods predict different levels of enhancement over the studied energy, nanoparticle materials, and concentration regime for several reasons. The Monte Carlo simulation calculates energy deposited by electrons and photons, while the analytical one only calculates energy deposited by source photons and photoelectrons; the Monte Carlo simulation accounts for electron–hole recombination, while the analytical one does not; and the Monte Carlo simulation randomly samples photon or electron path and accounts for particle interactions, while the analytical simulation assumes a linear trajectory. This study demonstrates that the Monte Carlo simulation will be a better choice to evaluate dose enhancement with nanoparticles in radiation therapy. PMID:27695329
Characterization of parallel-hole collimator using Monte Carlo Simulation
Pandey, Anil Kumar; Sharma, Sanjay Kumar; Karunanithi, Sellam; Kumar, Praveen; Bal, Chandrasekhar; Kumar, Rakesh
2015-01-01
Objective: Accuracy of in vivo activity quantification improves after the correction of penetrated and scattered photons. However, accurate assessment is not possible with physical experiments alone. We have used Monte Carlo simulation to accurately assess the contribution of penetrated and scattered photons in the photopeak window. Materials and Methods: Simulations were performed with the Simulation of Imaging Nuclear Detectors Monte Carlo code. The simulations were set up so that the geometric, penetration, and scatter components were provided after each simulation and written as binary images to a data file. These components were analyzed graphically using Microsoft Excel (Microsoft Corporation, USA). Each binary image was imported into ImageJ software, and a logarithmic transformation was applied for visual assessment of image quality, plotting the profile across the center of the images, and calculating the full width at half maximum (FWHM) in the horizontal and vertical directions. Results: The geometric, penetration, and scatter components at 140 keV for the low-energy general-purpose (LEGP) collimator were 93.20%, 4.13%, and 2.67%, respectively. Similarly, the geometric, penetration, and scatter components at 140 keV for the low-energy high-resolution (LEHR), medium-energy general-purpose (MEGP), and high-energy general-purpose (HEGP) collimators were (94.06%, 3.39%, 2.55%), (96.42%, 1.52%, 2.06%), and (96.70%, 1.45%, 1.85%), respectively. For the MEGP collimator at 245 keV and the HEGP collimator at 364 keV, the corresponding values were 89.10%, 7.08%, 3.82% and 67.78%, 18.63%, 13.59%, respectively. Conclusion: The LEGP and LEHR collimators are best for imaging 140 keV photons. The HEGP collimator can be used for 245 keV and 364 keV; however, corrections for penetration and scatter must be applied if one is interested in quantifying the in vivo activity at 364 keV. Due to heavy penetration and scattering, 511 keV photons should not be imaged with the HEGP collimator. PMID:25829730
Lucena, Sebastião M P; Mileo, Paulo G M; Silvino, Pedro F G; Cavalcante, Célio L
2011-12-01
The adsorption equilibrium of methane in PCN-14 was simulated by the Monte Carlo technique in the grand canonical ensemble. A new force field was proposed for the methane/PCN-14 system, and the temperature dependence of the molecular siting was investigated. A detailed study of the statistics of the center of mass and potential energy showed a surprising site behavior with no energy barriers between weak and strong sites, allowing open metal sites to guide methane molecules to other neighboring sites. Moreover, this study showed that a model assuming weakly adsorbing open metal clusters in PCN-14, densely populated only at low temperatures (below 150 K), can explain published experimental data. These results also explain previously observed discrepancies between neutron diffraction experiments and Monte Carlo simulations.
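Grand canonical Monte Carlo adsorption simulations of this kind alternate insertion and deletion trials with acceptance probabilities fixed by the chemical potential. The sketch below uses an ideal (non-interacting) adsorbate, a drastic simplification of the methane/PCN-14 force field, chosen only to show the acceptance bookkeeping with a known answer (Poisson occupancy):

```python
import math
import random

random.seed(13)

def gcmc_ideal(fugacity_beta_V, n_steps=50_000):
    """GCMC for an ideal gas: with no interactions the energy change is
    zero and the acceptance rules reduce to
      insert: min(1, zV/(N+1))    delete: min(1, N/zV)
    where zV = beta * f * V / Lambda^3 is lumped into one parameter.
    The occupancy then equilibrates to a Poisson distribution of mean zV."""
    zv = fugacity_beta_V
    n = 0
    history = []
    for _ in range(n_steps):
        if random.random() < 0.5:  # insertion trial
            if random.random() < min(1.0, zv / (n + 1)):
                n += 1
        elif n > 0:                # deletion trial (rejected outright at N=0)
            if random.random() < min(1.0, n / zv):
                n -= 1
        history.append(n)
    return history

hist = gcmc_ideal(fugacity_beta_V=8.0)
mean_n = sum(hist[10_000:]) / len(hist[10_000:])  # discard equilibration
```

In a real framework simulation the same loop adds a Boltzmann factor exp(-βΔU) from the force field to each acceptance probability, which is where the open-metal-site energetics discussed in the abstract enter.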
Monte Carlo simulation of ICRF discharge initiation in ITER
NASA Astrophysics Data System (ADS)
Tripský, M.; Wauters, T.; Lyssoivan, A.; Křivská, A.; Louche, F.; Van Schoor, M.; Noterdaeme, J.-M.
2015-12-01
Discharges produced and sustained by ion cyclotron range of frequency (ICRF) waves in the absence of plasma current will be used on ITER for ion cyclotron wall conditioning (ICWC). The simulations presented here aim at ensuring that the ITER ICRH&CD system can be safely employed for ICWC and at finding optimal parameters to initiate the plasma. The 1D Monte Carlo code RFdinity1D3V was developed to simulate ICRF discharge initiation. The code traces the electron motion along one toroidal magnetic field line, with electrons accelerated by the RF field in front of the ICRF antenna. Electron collisions are handled by a Monte Carlo procedure that takes into account the electron energies and the corresponding cross sections for collisions with H2, H2+, and H+. The code also includes Coulomb collisions between electrons and ions (e-e, e-H2+, e-H+). We study the electron multiplication rate as a function of the RF discharge parameters, (i) the antenna input power (0.1-5 MW) and (ii) the neutral (H2) pressure, for two antenna phasings (monopole [0000] phasing and small dipole [0π0π] phasing). Furthermore, we investigate the dependence of the electron multiplication rate on the distance from the antenna straps; this radial dependence results from the decreasing electric field amplitude and the field smoothening with increasing distance from the straps. The numerical plasma breakdown definition used in the code corresponds to the moment when a critical electron density nec for the lower hybrid resonance (ω = ωLHR) is reached. This numerical definition was previously found to be in qualitative agreement with experimental breakdown times from the literature and from experiments on ASDEX Upgrade and TEXTOR.
Longitudinal functional principal component modeling via Stochastic Approximation Monte Carlo
Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.
2010-01-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented. PMID:20689648
Direct simulation Monte Carlo and Navier-Stokes simulations of blunt body wake flows
NASA Astrophysics Data System (ADS)
Moss, James N.; Mitcheltree, Robert A.; Dogra, Virendra K.; Wilmoth, Richard G.
1994-07-01
Numerical results obtained with direct simulation Monte Carlo and Navier-Stokes methods are presented for a Mach-20 nitrogen flow about a 70-deg blunted cone. The flow conditions simulated are those that can be obtained in existing low-density hypersonic wind tunnels. Three sets of flow conditions are considered, with freestream Knudsen numbers ranging from 0.03 to 0.001. The focus is on the wake structure: how the wake structure changes as a function of rarefaction, what the afterbody levels of heating are, and to what limits the continuum models remain realistic as rarefaction in the wake is progressively increased. Calculations are made with and without an afterbody sting. Results for the afterbody sting are emphasized in anticipation of an experimental study for the current flow conditions and model configuration. The Navier-Stokes calculations were made with and without slip boundary conditions. Comparisons of the results obtained with the two simulation methodologies are made for both flowfield structure and surface quantities.
Direct simulation Monte Carlo and Navier-Stokes simulations of blunt body wake flows
NASA Technical Reports Server (NTRS)
Moss, James N.; Mitcheltree, Robert A.; Dogra, Virendra K.; Wilmoth, Richard G.
1994-01-01
Numerical results obtained with direct simulation Monte Carlo and Navier-Stokes methods are presented for a Mach-20 nitrogen flow about a 70-deg blunted cone. The flow conditions simulated are those that can be obtained in existing low-density hypersonic wind tunnels. Three sets of flow conditions are considered, with freestream Knudsen numbers ranging from 0.03 to 0.001. The focus is on the wake structure: how the wake structure changes as a function of rarefaction, what the afterbody levels of heating are, and to what limits the continuum models remain realistic as rarefaction in the wake is progressively increased. Calculations are made with and without an afterbody sting. Results for the afterbody sting are emphasized in anticipation of an experimental study for the current flow conditions and model configuration. The Navier-Stokes calculations were made with and without slip boundary conditions. Comparisons of the results obtained with the two simulation methodologies are made for both flowfield structure and surface quantities.
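The freestream Knudsen number quoted above is the ratio of the mean free path to a characteristic body length; a hard-sphere estimate shows how pressure moves a flow between the near-continuum (Kn ~ 0.001) and transitional (Kn ~ 0.03) regimes. The gas conditions below are illustrative, not the tunnel conditions of the study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def mean_free_path(T, p, d):
    """Hard-sphere mean free path [m] at temperature T [K],
    pressure p [Pa], molecular diameter d [m]."""
    return K_B * T / (math.sqrt(2.0) * math.pi * d * d * p)

def knudsen(T, p, d, L):
    """Knudsen number Kn = lambda / L for characteristic length L [m];
    Kn near 0.001 is close to continuum, Kn near 0.03 is transitional."""
    return mean_free_path(T, p, d) / L

# Illustrative low-density conditions: nitrogen (d ~ 3.7e-10 m) at 300 K
# over a hypothetical 0.07 m base diameter.
kn_low_p = knudsen(300.0, 2.0, 3.7e-10, 0.07)    # ~2 Pa: transitional
kn_high_p = knudsen(300.0, 60.0, 3.7e-10, 0.07)  # ~60 Pa: near-continuum
```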
Majorana Positivity and the Fermion Sign Problem of Quantum Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Wei, Z. C.; Wu, Congjun; Li, Yi; Zhang, Shiwei; Xiang, T.
2016-06-01
The sign problem is a major obstacle in quantum Monte Carlo simulations for many-body fermion systems. We examine this problem with a new perspective based on the Majorana reflection positivity and Majorana Kramers positivity. Two sufficient conditions are proven for the absence of the fermion sign problem. Our proof provides a unified description for all the interacting lattice fermion models previously known to be free of the sign problem based on the auxiliary field quantum Monte Carlo method. It also allows us to identify a number of new sign-problem-free interacting fermion models including, but not limited to, lattice fermion models with repulsive interactions but without particle-hole symmetry, and interacting topological insulators with spin-flip terms.
Majorana Positivity and the Fermion Sign Problem of Quantum Monte Carlo Simulations.
Wei, Z C; Wu, Congjun; Li, Yi; Zhang, Shiwei; Xiang, T
2016-06-24
The sign problem is a major obstacle in quantum Monte Carlo simulations for many-body fermion systems. We examine this problem with a new perspective based on the Majorana reflection positivity and Majorana Kramers positivity. Two sufficient conditions are proven for the absence of the fermion sign problem. Our proof provides a unified description for all the interacting lattice fermion models previously known to be free of the sign problem based on the auxiliary field quantum Monte Carlo method. It also allows us to identify a number of new sign-problem-free interacting fermion models including, but not limited to, lattice fermion models with repulsive interactions but without particle-hole symmetry, and interacting topological insulators with spin-flip terms. PMID:27391709
Monte Carlo simulations on marker grouping and ordering.
Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C
2003-08-01
Four global algorithms, maximum likelihood (ML), sum of adjacent LOD score (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fractions (PARF), and one approximation algorithm, seriation (SER), were compared for marker ordering efficiency on correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all of these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For greater generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small one, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed between cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM is recommended for achieving acceptable grouping and ordering power across cases.
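The grouping step above can be sketched as single-linkage clustering of markers by map distance against a cutoff; the Haldane function below converts a recombination fraction to centimorgans. The marker positions and cutoffs are hypothetical, chosen only to show how a cutoff in the recommended 50-60 cM band recovers true linkage groups while an overly strict one fragments them.

```python
import math

def haldane_cm(r):
    """Haldane map distance in centimorgans from recombination fraction r."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def group_markers(positions, cutoff_cm):
    """Greedy single-linkage grouping: adjacent markers (sorted by map
    position, in cM) join one linkage group when their spacing is below
    the cutoff; a larger gap starts a new group."""
    groups, current = [], [positions[0]]
    for a, b in zip(positions, positions[1:]):
        if b - a <= cutoff_cm:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

d = haldane_cm(0.2)  # a 20% recombination fraction maps to ~25.5 cM

# Hypothetical markers on two true chromosomes: 0-40 cM and 120-160 cM.
marks = [0, 10, 20, 30, 40, 120, 130, 140, 150, 160]
g50 = group_markers(marks, 50.0)  # cutoff in the recommended band
g5 = group_markers(marks, 5.0)    # too-strict cutoff fragments the groups
```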
Scalable Metropolis Monte Carlo for simulation of hard shapes
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.
2016-07-01
We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
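The core of a hard-particle Metropolis step described above is simple: propose a displacement and accept it only if it creates no overlap (the overlap energy is infinite, and zero otherwise). A serial hard-disk sketch with periodic boundaries follows; this is the basic move, not HPMC's checkerboard GPU scheme or BVH acceleration.

```python
import random

def overlaps(p, others, sigma, box):
    """True if a disk at p (diameter sigma) overlaps any disk in others,
    using the minimum image convention in a periodic square box."""
    for q in others:
        dx = p[0] - q[0]
        dy = p[1] - q[1]
        dx -= box * round(dx / box)
        dy -= box * round(dy / box)
        if dx * dx + dy * dy < sigma * sigma:
            return True
    return False

def sweep(disks, sigma, box, delta, rng):
    """One Metropolis sweep of hard-disk trial moves: each move is
    accepted iff the displaced disk overlaps no other disk."""
    accepted = 0
    for i in range(len(disks)):
        x, y = disks[i]
        trial = ((x + rng.uniform(-delta, delta)) % box,
                 (y + rng.uniform(-delta, delta)) % box)
        if not overlaps(trial, disks[:i] + disks[i + 1:], sigma, box):
            disks[i] = trial
            accepted += 1
    return accepted

rng = random.Random(1)
box, sigma = 10.0, 1.0
# Start on a dilute square lattice so the initial state is overlap-free.
disks = [(2.5 * i + 1.0, 2.5 * j + 1.0) for i in range(4) for j in range(4)]
acc = sum(sweep(disks, sigma, box, 0.3, rng) for _ in range(50))
```

HPMC parallelizes exactly this move by proposing many such trials at once on cells of a checkerboard, so that concurrent moves can never interact.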
Monte Carlo simulation of carboxylic acid phase equilibria.
Clifford, Scott; Bolton, Kim; Ramjugernath, Deresh
2006-11-01
Configurational-bias Monte Carlo simulations were carried out in the Gibbs ensemble to generate phase equilibrium data for several carboxylic acids. Pure component coexistence densities and saturated vapor pressures were determined for acetic acid, propanoic acid, 2-methylpropanoic acid, and pentanoic acid, and binary vapor-liquid equilibrium (VLE) data for the propanoic acid + pentanoic acid and 2-methylpropanoic acid + pentanoic acid systems. The TraPPE-UA force field was used, as it has recently been extended to include parameters for carboxylic acids. To simulate the branched compound 2-methylpropanoic acid, certain minor assumptions were necessary regarding angle and torsion terms involving the -CH- pseudo-atom, since parameters for these terms do not exist in the TraPPE-UA force field. The pure component data showed good agreement with the available experimental data, particularly with regard to the saturated liquid densities (mean absolute errors were less than 1.1%). On average, the predicted critical temperature and density were within 1% of the experimental values. All of the binary simulations showed good agreement with the experimental x-y data. However, the TraPPE-UA force field predicts saturated vapor pressures of pure components that are larger than the experimental values, and consequently the P-x-y and T-x-y data of the binary systems also deviate from the measured data.
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Radak, Brian K.; Roux, Benoît
2016-10-01
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Finally, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
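The Metropolis criterion for a symmetric neMD/MC switch accepts with probability min(1, e^(-βW)), where W is the nonequilibrium work. Under the near-equilibrium Gaussian-work assumption of linear response theory (variance equal to twice the mean dissipated work, in kT units), the mean acceptance is erfc(sqrt(W_diss)/2). The sketch below checks this numerically with a toy work distribution; it is not an actual constant-pH simulation.

```python
import math
import random

def nemd_mc_acceptance(w_diss, n, rng):
    """Empirical acceptance rate of a symmetric neMD/MC move whose work
    (in kT units) is Gaussian with mean w_diss and variance 2*w_diss, as
    linear response theory assumes near equilibrium. Each switch is
    accepted with Metropolis probability min(1, exp(-W))."""
    acc = 0
    sd = math.sqrt(2.0 * w_diss)
    for _ in range(n):
        w = rng.gauss(w_diss, sd)
        if math.log(rng.random()) < -w:
            acc += 1
    return acc / n

rng = random.Random(7)
w_diss = 1.0                                    # dissipated work in kT
empirical = nemd_mc_acceptance(w_diss, 200_000, rng)
predicted = math.erfc(math.sqrt(w_diss) / 2.0)  # linear-response prediction
```

Longer protocols lower w_diss and raise the acceptance rate but cost more simulation time per attempt; the paper's optimization criterion balances exactly this trade-off.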
Monte Carlo simulation of vapor transport in physical vapor deposition of titanium
Balakrishnan, Jitendra; Boyd, Iain D.; Braun, David G.
2000-05-01
In this work, the direct simulation Monte Carlo (DSMC) method is used to model the physical vapor deposition of titanium using electron-beam evaporation. Titanium atoms are vaporized from a molten pool at a very high temperature and are accelerated collisionally to the deposition surface. The electronic excitation of the vapor is significant at the temperatures of interest. Energy transfer between the electronic and translational modes of energy affects the flow significantly. The electronic energy is modeled in the DSMC method and comparisons are made between simulations in which electronic energy is excluded from and included among the energy modes of particles. The experimentally measured deposition profile is also compared to the results of the simulations. It is concluded that electronic energy is an important factor to consider in the modeling of flows of this nature. The simulation results show good agreement with experimental data. (c) 2000 American Vacuum Society.
NASA Astrophysics Data System (ADS)
Ballarini, F.; Fluka-Phantoms Team
Astronauts' exposure to space radiation is of major concern for long-term missions, especially for those in deep space such as a possible mission to Mars. Shielding optimization is therefore a crucial issue, and simulations based on radiation transport codes coupled with anthropomorphic model phantoms can be of great help. In this work, carried out with the FLUKA MC code and two anthropomorphic phantoms (a mathematical model and a "voxel" model), distributions of physical (i.e. absorbed), equivalent and "biological" dose in the various tissues and organs were calculated in different shielding conditions for solar minimum and solar maximum GCR spectra, as well as for the August 1972 Solar Particle Event. The biological dose was modeled as the average number of "Complex Lesions" (CL) per cell in a given organ. CLs are clustered DNA breaks previously calculated with "event-by-event" track structure simulations and integrated in the condensed-history FLUKA code. This approach is peculiar in that it is an example of a mechanistically-based quantification of the ionizing radiation action in biological targets; indeed CLs have been shown to play a fundamental role in chromosome aberration induction. The contributions of primary particles and secondary hadrons were calculated separately, thus allowing quantification of the role of nuclear reactions in the shield and in the human body. As expected, the doses calculated for the 1972 SPE decrease dramatically with increasing the Al shielding; nuclear reactions were found to be of minor importance, although their role is higher for internal organs and large shielding. An Al shield thickness of 10 g/cm2 appears sufficient to respect the 30-day deterministic limits recommended by NCRP for missions in Low Earth Orbit. In contrast with the results obtained for SPE, GCR doses to internal organs are not significantly lower than skin doses. However, the relative contribution of secondary hadrons was found to be more important for
Evaluation of effective dose with chest digital tomosynthesis system using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kim, Dohyeon; Jo, Byungdu; Lee, Youngjin; Park, Su-Jin; Lee, Dong-Hoon; Kim, Hee-Joung
2015-03-01
Chest digital tomosynthesis (CDT) systems have recently been introduced and studied. Such a system offers the potential of a substantial improvement over conventional chest radiography for lung nodule detection, and it reduces the radiation dose by acquiring projections over a limited range of angles. The PC-based Monte Carlo program (PCXMC) simulation toolkit (STUK, Helsinki, Finland) is widely used to evaluate radiation dose in CDT systems. However, this toolkit has two significant limitations: it cannot describe a model for every individual patient, and it does not describe an accurate X-ray beam spectrum. In contrast, the Geant4 Application for Tomographic Emission (GATE) simulation can describe phantoms of various sizes for individual patients and a proper X-ray spectrum. However, few studies have evaluated effective dose in CDT systems with the GATE Monte Carlo simulation toolkit. The purpose of this study was to evaluate effective dose in a virtual infant chest phantom in the posterior-anterior (PA) view in a CDT system using GATE simulation. We obtained the effective dose at different tube angles by applying the dose actor function in GATE, which is commonly used for medical radiation dosimetry. The results indicated that GATE simulation is useful for estimating the distribution of absorbed dose. Consequently, we obtained an acceptable distribution of effective dose at each projection. These results indicate that GATE simulation can be an alternative method for calculating effective dose in CDT applications.
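Effective dose, the quantity evaluated above, is the tissue-weighted sum E = Σ_T w_T H_T over organ equivalent doses. A minimal sketch follows, using an illustrative subset of the ICRP 103 tissue weighting factors and hypothetical organ doses; the abstract does not list the weights or dose values actually used.

```python
def effective_dose(organ_dose_mgy, tissue_weight, radiation_weight=1.0):
    """Effective dose E = sum_T w_T * H_T, where the equivalent dose
    H_T = w_R * D_T. For photons w_R = 1, so H_T equals the absorbed
    dose and E comes out in mSv when D_T is in mGy."""
    return sum(tissue_weight[t] * radiation_weight * d
               for t, d in organ_dose_mgy.items())

# Illustrative subset of ICRP 103 tissue weighting factors (a full
# calculation sums over all listed tissues plus the remainder).
W_T = {"lung": 0.12, "breast": 0.12, "stomach": 0.12,
       "liver": 0.04, "thyroid": 0.04}

# Hypothetical per-organ absorbed doses (mGy) from one tomosynthesis sweep.
doses = {"lung": 0.20, "breast": 0.15, "stomach": 0.05,
         "liver": 0.08, "thyroid": 0.02}

e_dose = effective_dose(doses, W_T)  # mSv, since w_R = 1 for photons
```

In a GATE workflow the per-organ absorbed doses would come from dose actors attached to the phantom's organ volumes, summed over all projection angles.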
MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
MCViNE - An object oriented Monte Carlo neutron ray tracing simulation package
NASA Astrophysics Data System (ADS)
Lin, Jiao Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2016-02-01
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. With simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of a graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) written in the MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS uses the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: Phase-space data of a 6-MV photon beam from a Varian Clinac unit were developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display radiotherapy treatment plans created by the MC method and by various treatment planning systems, in formats such as RTOG and DICOM-RT. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time improved in line with the increase in the number of central processing units (CPUs), at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
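The computation efficiency reported above is the usual parallel-scaling metric: speedup (serial time over parallel time) divided by the number of CPUs. A sketch with hypothetical timings, not the MCVS cluster's measured values:

```python
def speedup(t_serial, t_parallel):
    """Parallel speedup S = T_1 / T_N."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cpus):
    """Computation efficiency E = S / N; E close to 1 means near-linear
    scaling with the number of CPUs, as the MCVS cluster reports (>98%)."""
    return speedup(t_serial, t_parallel) / n_cpus

# Hypothetical timings: a 16-CPU run finishing in 1/15.7 of serial time.
eff = efficiency(3600.0, 3600.0 / 15.7, 16)
```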
Numerical simulations of blast-impact problems using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Sharma, Anupam
There is an increasing need to design protective structures that can withstand or mitigate the impulsive loading due to the impact of a blast or a shock wave. A preliminary step in designing such structures is the prediction of the pressure loading on the structure. This is called the "load definition." This thesis is focused on a numerical approach to predict the load definition on arbitrary geometries for a given strength of the incident blast/shock wave. A particle approach, namely the Direct Simulation Monte Carlo (DSMC) method, is used as the numerical model. A three-dimensional, time-accurate DSMC flow solver is developed as a part of this study. Embedded surfaces, modeled as triangulations, are used to represent arbitrary-shaped structures. Several techniques to improve the computational efficiency of the algorithm of particle-structure interaction are presented. The code is designed using the Object Oriented Programming (OOP) paradigm. Domain decomposition with message passing is used to solve large problems in parallel. The solver is extensively validated against analytical results and against experiments. Two kinds of geometries, a box and an I-shaped beam are investigated for blast impact. These simulations are performed in both two- and three-dimensions. A major portion of the thesis is dedicated to studying the uncoupled fluid dynamics problem where the structure is assumed to remain stationary and intact during the simulation. A coupled, fluid-structure dynamics problem is solved in one spatial dimension using a simple, spring-mass-damper system to model the dynamics of the structure. A parametric study, by varying the mass, spring constant, and the damping coefficient, to study their effect on the loading and the displacement of the structure is also performed. Finally, the parallel performance of the solver is reported for three sample-size problems on two Beowulf clusters.
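The one-dimensional coupled problem described above can be sketched as a spring-mass-damper system driven by a blast load, integrated with semi-implicit Euler. The mass, stiffness, damping, and triangular pulse below are hypothetical values for illustration, not the thesis's parameter study.

```python
def smd_response(m, c, k, force, dt, n_steps):
    """Semi-implicit Euler integration of a spring-mass-damper model
    m*x'' + c*x' + k*x = F(t), a 1D structural surrogate for the blast
    loading problem. force(t) gives the applied load at time t."""
    x, v = 0.0, 0.0
    xs = []
    for i in range(n_steps):
        t = i * dt
        a = (force(t) - c * v - k * x) / m   # acceleration from the ODE
        v += a * dt                          # update velocity first
        x += v * dt                          # then position (semi-implicit)
        xs.append(x)
    return xs

# Hypothetical triangular blast pulse: 1000 N peak decaying to 0 in 5 ms.
def blast(t):
    return 1000.0 * max(0.0, 1.0 - t / 0.005)

xs = smd_response(m=10.0, c=50.0, k=1.0e5, force=blast, dt=1.0e-5,
                  n_steps=5000)
```

Varying m, k, and c in such a loop is the parametric study mentioned above; the DSMC solver replaces the prescribed force with the pressure load computed on the embedded surface.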
Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Celenligil, M. Cevdet; Moss, James N.
1993-01-01
A summary of three-dimensional simulations of hypersonic rarefied flows, in an effort to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere for a realistic estimation of the aerothermal loads, is presented. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure, and overall aerodynamic coefficients are presented, and comparisons are made with the available experimental data. The agreement between the calculated and measured results is very good.
Urbic, T; Holovko, M F
2011-10-01
An associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.
Urbic, T.; Holovko, M. F.
2011-01-01
An associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied. PMID:21992334
NASA Astrophysics Data System (ADS)
Urbic, T.; Holovko, M. F.
2011-10-01
An associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.
Prediction of thermodynamic properties of natural gases using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Piyanzina, I.; Lysogorskiy, Yu; Nedopekin, O.
2012-11-01
In this paper an application of Monte Carlo simulation to natural gas production is presented. We investigated a model of the natural gas of the Bavlinskoye deposit, located in the southeast of the Republic of Tatarstan. For this natural gas, and for pure methane and ethane, we obtained the thermal expansivity, isothermal compressibility, compressibility factor, heat capacity, Joule-Thomson coefficient and density at pressures up to 110 MPa at the deposit temperature (463 K). We also obtained vapor pressures and liquid-vapor phase diagrams. The simulated properties for methane are in good agreement with available experimental data.
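One of the properties listed above, the compressibility factor, is Z = pV_m/(RT), equal to 1 for an ideal gas; deviations from 1 quantify real-gas behavior at high pressure. The state point below is hypothetical, not a Bavlinskoye measurement or a simulation result from the paper.

```python
R = 8.314462618  # universal gas constant [J/(mol K)]

def compressibility_factor(p, v_m, T):
    """Compressibility factor Z = p*V_m / (R*T) for pressure p [Pa],
    molar volume v_m [m^3/mol], and temperature T [K]."""
    return p * v_m / (R * T)

# Hypothetical state point near the deposit temperature quoted above:
# 463 K and 10 MPa with an assumed molar volume of 3.7e-4 m^3/mol.
z = compressibility_factor(10.0e6, 3.7e-4, 463.0)

# Sanity check: at the ideal-gas molar volume, Z is exactly 1.
v_m_ideal = R * 463.0 / 10.0e6
z_ideal = compressibility_factor(10.0e6, v_m_ideal, 463.0)
```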
Comparison of neutron diffusion and Monte Carlo models for a fission wave
Osborne, A. G.; Deinert, M. R.
2013-07-01
Many groups have used neutron diffusion simulations to study fission wave phenomena in natural or depleted uranium. However, few studies of fission wave phenomena have been published that use Monte Carlo simulations to confirm the results of diffusion models for this type of system. In the present work we show the results of a criticality and burnup simulation of a traveling wave reactor using MCNPX 2.7.0. The characteristics of the fission wave in this simulation are compared with those from a simple one-dimensional, one-group neutron diffusion model. The diffusion simulations produce a wave speed of 5.9 cm/yr versus 5.3 cm/yr for the Monte Carlo simulations. The axial flux profile in the Monte Carlo simulation is similar in shape to the diffusion results, but with different peak values, and the two profiles have an R2 value of 0.93. The {sup 238}U, {sup 239}Np and {sup 239}Pu burnup profiles from the diffusion simulation show good agreement with the Monte Carlo simulations, R values of 0.98, 0.93 and 0.97 respectively are observed. (authors)
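The profile comparisons above use the coefficient of determination, R^2 = 1 - SS_res/SS_tot. A sketch with hypothetical flux profiles (not the paper's data) shows the computation:

```python
def r_squared(observed, predicted):
    """Coefficient of determination R^2 = 1 - SS_res/SS_tot, the metric
    used to compare Monte Carlo and diffusion flux/burnup profiles."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical axial flux profiles (arbitrary units) at matching nodes.
mc_flux = [0.10, 0.40, 1.00, 0.90, 0.30]
diffusion_flux = [0.12, 0.38, 0.92, 0.95, 0.28]
r2 = r_squared(mc_flux, diffusion_flux)
```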
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
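The rescaling idea behind single-run reuse (often called "white Monte Carlo") reweights each detected photon's stored path length by Beer-Lambert attenuation at the new absorption coefficient, so no new photon transport is needed per optical-property set. A CPU sketch of the principle follows; the paper's contribution is performing this rescaling on the GPU, and the synthetic path lengths below are stand-ins for stored simulation output.

```python
import math
import random

def rescale_reflectance(path_lengths, mu_a):
    """Rescale a single baseline Monte Carlo run to a new absorption
    coefficient mu_a [1/mm]: each detected photon's stored path length
    L_i is reweighted by Beer-Lambert attenuation exp(-mu_a * L_i), and
    the diffuse reflectance is the mean surviving weight."""
    return sum(math.exp(-mu_a * L) for L in path_lengths) / len(path_lengths)

# Hypothetical path lengths [mm] of photons detected in an absorption-free
# baseline run (in practice these come from the stored simulation output).
rng = random.Random(3)
paths = [rng.expovariate(1.0 / 5.0) for _ in range(10_000)]

r_low = rescale_reflectance(paths, 0.01)  # weakly absorbing tissue
r_high = rescale_reflectance(paths, 0.5)  # strongly absorbing tissue
```

Because each photon's reweighting is independent, the sum maps naturally onto one GPU thread per photon, which is why the approach parallelizes so well.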
NASA Astrophysics Data System (ADS)
Yasuda, Shinya; Todo, Synge
2013-12-01
We present a method that optimizes the aspect ratio of a spatially anisotropic quantum lattice model during the quantum Monte Carlo simulation, automatically realizing a virtually isotropic lattice. The anisotropy is removed by using the Robbins-Monro algorithm based on the correlation length in each direction. The method allows the value of the critical amplitude to be compared directly among different anisotropic models, and the universality to be identified more precisely. We apply our method to the staggered dimer antiferromagnetic Heisenberg model and demonstrate that the apparent nonuniversal behavior is attributed mainly to the strong size correction of the effective aspect ratio due to the existence of the cubic interaction.
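The Robbins-Monro step at the heart of the method is simple to sketch: treat the measured anisotropy as a noisy function of the aspect ratio and drive it to zero with decreasing step sizes. Below is a minimal Python illustration in which a toy noisy function stands in for the correlation-length measurement of an actual quantum Monte Carlo run; the gain and the "true" ratio are arbitrary illustrative values.

```python
import random

random.seed(1)

def measured_anisotropy(aspect_ratio, true_ratio=1.7, noise=0.05):
    """Toy stand-in for a Monte Carlo estimate of xi_x/xi_y - 1 at a given
    lattice aspect ratio. In the actual method this number comes from
    correlation lengths measured during the QMC run; here a noisy monotone
    function with a root at true_ratio is used for illustration."""
    return (aspect_ratio - true_ratio) / true_ratio + random.gauss(0.0, noise)

def robbins_monro(f, x0=1.0, gain=1.0, n_iter=2000):
    """Robbins-Monro stochastic root finding: x <- x - (gain/n) * f(x).
    The 1/n step schedule averages out the measurement noise while still
    moving far enough, in total, to reach the root."""
    x = x0
    for n in range(1, n_iter + 1):
        x -= (gain / n) * f(x)
    return x

r_star = robbins_monro(measured_anisotropy)   # drifts toward ~1.7
```

The same update applied to the measured correlation-length ratio is what lets the simulation tune itself to an effectively isotropic lattice without any prior knowledge of the optimal aspect ratio.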
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, G.C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
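The sampling-without-replacement step of such a discovery process model is easy to sketch. The snippet below (Python rather than the program's Fortran, with a hypothetical single discoverability exponent beta in place of the program's specific two-parameter model) shows size-biased sampling without replacement, in which larger fields tend to be discovered earlier:

```python
import random

random.seed(42)

def simulate_discovery_sequence(field_sizes, beta=1.0):
    """Sample a discovery order without replacement, with the chance of
    finding each remaining field proportional to size**beta, so larger
    fields tend to be discovered earlier. beta is a hypothetical single
    discoverability exponent used here purely for illustration."""
    remaining = list(field_sizes)
    sequence = []
    while remaining:
        weights = [s ** beta for s in remaining]
        pick = random.choices(range(len(remaining)), weights=weights)[0]
        sequence.append(remaining.pop(pick))
    return sequence

parent = [1, 5, 20, 100, 400]   # synthetic parent population of field sizes
seq = simulate_discovery_sequence(parent)
```

With beta = 0 the discovery order is uniform; larger beta concentrates early discoveries on the largest fields, mimicking the empirically observed decline of discovered field sizes over a basin's exploration history.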
Hofmann, Marco; Lux, Robert; Schultz, Hans R
2014-01-01
Grapes for wine production are a highly climate-sensitive crop, and the vineyard water budget is a decisive factor in quality formation. In order to conduct risk assessments for climate change effects in viticulture, models are needed which can be applied to complete growing regions. We first modified an existing simplified geometric vineyard model of radiation interception and resulting water use to incorporate numerical Monte Carlo simulations and the physical aspects of radiation interactions between canopy and vineyard slope and azimuth. We then used four regional climate models to assess possible effects on the water budget of selected vineyard sites up to 2100. The model was developed to describe the partitioning of short-wave radiation between the grapevine canopy and the soil surface or green cover, which is necessary to calculate vineyard evapotranspiration. Soil water storage was allocated to two sub-reservoirs. The model was adapted for steep-slope vineyards based on coordinate transformation and validated against measurements of grapevine sap flow and soil water content determined down to 1.6 m depth at three different sites over 2 years. The results showed good agreement between modeled and observed soil water dynamics of vineyards with large variations in site-specific soil water holding capacity (SWC) and viticultural management. Simulated sap flow was in overall good agreement with measured sap flow, but site-specific responses of sap flow to potential evapotranspiration were observed. The analyses of climate change impacts on the vineyard water budget demonstrated the importance of site-specific assessment due to natural variations in SWC. The improved model was capable of describing seasonal and site-specific dynamics in soil water content and could be used in an amended version to estimate changes in the water budget of entire grape growing areas under evolving climatic changes. PMID:25540646
Investigation of radiative interaction in laminar flows using Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is employed to study the radiative interactions in fully developed laminar flow between two parallel plates. Taking advantage of the straightforward mathematical treatment afforded by the MCM, a general numerical procedure is developed for nongray radiative interaction. The nongray model is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. To validate the Monte Carlo simulation for nongray radiation problems, the results of radiative dissipation from the MCM are compared with two available solutions for a given temperature profile between two plates. After this validation, the MCM is employed to solve the present physical problem, and results for the bulk temperature are compared with available solutions. In general, good agreement is noted, and reasons for some discrepancies in certain ranges of parameters are explained.
Molecular simulation of shocked materials using the reactive Monte Carlo method
NASA Astrophysics Data System (ADS)
Brennan, John K.; Rice, Betsy M.
2002-08-01
We demonstrate the applicability of the reactive Monte Carlo (RxMC) simulation method [J. K. Johnson, A. Z. Panagiotopoulos, and K. E. Gubbins, Mol. Phys. 81, 717 (1994); W. R. Smith and B. Tříska, J. Chem. Phys. 100, 3019 (1994)] for calculating the shock Hugoniot properties of a material. The method does not require interaction potentials that simulate bond breaking or bond formation; it requires only the intermolecular potentials and the ideal-gas partition functions for the reactive species that are present. By performing Monte Carlo sampling of forward and reverse reaction steps, the RxMC method provides information on the chemical equilibria states of the shocked material, including the density of the reactive mixture and the mole fractions of the reactive species. We illustrate the methodology for two simple systems (shocked liquid NO and shocked liquid N2), where we find excellent agreement with experimental measurements. The results show that the RxMC methodology provides an important simulation tool capable of testing models used in current detonation theory predictions. Further applications and extensions of the reactive Monte Carlo method are discussed.
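The forward/reverse sampling that RxMC performs can be sketched for the simplest possible chemistry, an ideal-gas isomerization A <=> B. The Python sketch below is an illustrative toy, not the NO or N2 systems of the paper: the only input is the partition-function ratio K = q_B/q_A, mirroring the method's key feature that no bond-breaking potential is required.

```python
import random

random.seed(7)

def rxmc_isomerization(n_total=1000, K=3.0, n_steps=200_000):
    """Minimal reactive Monte Carlo sketch for an ideal-gas isomerization
    A <=> B with equilibrium constant K = q_B/q_A. Each step proposes a
    forward or reverse reaction and accepts it with the RxMC probability
    derived from the ideal-mixture distribution; no intramolecular
    reaction potential is needed, only the partition-function ratio."""
    n_a, n_b = n_total, 0
    for _ in range(n_steps):
        if random.random() < 0.5:                      # propose forward: A -> B
            if n_a > 0 and random.random() < min(1.0, K * n_a / (n_b + 1)):
                n_a, n_b = n_a - 1, n_b + 1
        else:                                          # propose reverse: B -> A
            if n_b > 0 and random.random() < min(1.0, n_b / (K * (n_a + 1))):
                n_a, n_b = n_a + 1, n_b - 1
    return n_a, n_b

n_a, n_b = rxmc_isomerization()
```

At equilibrium the composition fluctuates around N_B/N_A = K, the chemical-equilibrium condition that the acceptance probabilities are built to enforce through detailed balance; the shock-Hugoniot application adds pressure and energy bookkeeping on top of this same reaction move.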
Direct simulation Monte Carlo investigation of the Rayleigh-Taylor instability
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.; Plimpton, S. J.
2016-08-01
The Rayleigh-Taylor instability (RTI) is investigated using the direct simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, fully resolved two-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode perturbed interfaces between two atmospheric-pressure monatomic gases as a function of the Atwood number and the gravitational acceleration. The DSMC simulations reproduce many qualitative features of the growth of the mixing layer and are in reasonable quantitative agreement with theoretical and empirical models in the linear, nonlinear, and self-similar regimes. In some of the simulations at late times, the instability enters the self-similar regime, in agreement with experimental observations. For the conditions simulated, diffusion can influence the initial instability growth significantly.
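For readers unfamiliar with DSMC, the core of the method is a stochastic collision step performed cell by cell. The sketch below is a generic single-cell no-time-counter (NTC) collision step for equal-mass hard spheres in plain Python; all parameter values are illustrative and unrelated to the simulations of the paper.

```python
import math
import random

random.seed(11)

def dsmc_cell_collisions(velocities, dt, cell_volume, n_real_per_sim,
                         diameter=3.7e-10, cr_max=2000.0):
    """One no-time-counter (NTC) collision step for a single DSMC cell of
    equal-mass hard spheres: choose candidate pairs, accept each with
    probability proportional to its relative speed, and scatter accepted
    pairs isotropically in the centre-of-mass frame."""
    n = len(velocities)
    sigma = math.pi * diameter ** 2        # hard-sphere cross section
    n_cand = int(0.5 * n * (n - 1) * n_real_per_sim * sigma * cr_max
                 * dt / cell_volume)
    for _ in range(n_cand):
        i, j = random.sample(range(n), 2)
        rel = [velocities[i][k] - velocities[j][k] for k in range(3)]
        cr = math.sqrt(sum(c * c for c in rel))
        if random.random() * cr_max < cr:  # NTC acceptance test
            cos_t = 2.0 * random.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * random.random()
            new_rel = [cr * sin_t * math.cos(phi),
                       cr * sin_t * math.sin(phi),
                       cr * cos_t]         # same speed, random direction
            cm = [(velocities[i][k] + velocities[j][k]) / 2.0
                  for k in range(3)]
            velocities[i] = [cm[k] + new_rel[k] / 2.0 for k in range(3)]
            velocities[j] = [cm[k] - new_rel[k] / 2.0 for k in range(3)]
    return velocities

# Illustrative cell: 50 simulated molecules at roughly room-temperature speeds.
cell = [[random.gauss(0.0, 300.0) for _ in range(3)] for _ in range(50)]
dsmc_cell_collisions(cell, dt=1e-5, cell_volume=1e-6, n_real_per_sim=1e15)
```

Because collisions preserve the centre-of-mass velocity and the relative speed, momentum and kinetic energy are conserved by construction, which is what lets DSMC reproduce hydrodynamic behavior such as the RTI from purely molecular rules.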
Monte Carlo simulations for BNL RHIC spin physics
Guellenstern, S.; Gornicki, P.; Mankiewicz, L.; Schaefer, A.
1995-04-01
Direct photon production in longitudinally polarized proton-proton collisions offers the most direct and unproblematic possibility for determining the polarized gluon distribution of a proton. This information could play a major role in improving our understanding of the nucleon structure and QCD in general. It is hoped that such experiments will be done at the BNL RHIC. We present results of detailed Monte Carlo simulations using a code called SPHINX. We find that for RHIC energies and large gluon polarization the Compton graph dominates, allowing for a direct test of Δg. Triggering on away-side jets with the envisaged jet criteria should make it possible to obtain more detailed information on Δg(x). The photon asymmetry resulting from the asymmetry of produced π{sup 0}'s provides an additional signal, which is complementary to the other two. For small gluon polarization, i.e., Δg ≤ 0.5, or very soft polarized gluon distributions, the envisaged experiments will require highly sophisticated simulation and large statistics to extract more than upper bounds for |Δg(x)|.
A Monte Carlo simulation technique for low-altitude, wind-shear turbulence
NASA Technical Reports Server (NTRS)
Bowles, Roland L.; Laituri, Tony R.; Trevino, George
1990-01-01
A case is made for including anisotropy in a Monte Carlo flight simulation scheme of low-altitude wind-shear turbulence by means of power spectral density. This study attempts to eliminate all flight simulation-induced deficiencies in the basic turbulence model. A full-scale low-altitude wind-shear turbulence simulation scheme is proposed with particular emphasis on low cost and practicality for near-ground flight. The power spectral density statistic is used to highlight the need for realistic estimates of energy transfer associated with low-altitude wind-shear turbulence. The simulation of a particular anisotropic turbulence model is shown to be a relatively simple extension from that of traditional isotropic (Dryden) turbulence.
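A Dryden-type turbulence component of the sort mentioned above is commonly realized by passing white noise through a shaping filter. The sketch below is a generic first-order (Gauss-Markov) discrete realization of the longitudinal Dryden spectrum in Python; the airspeed, scale length, and intensity are illustrative values, not those of the study.

```python
import math
import random

random.seed(3)

def dryden_longitudinal(n_samples, dt=0.01, sigma=1.5,
                        scale_length=200.0, airspeed=80.0):
    """Discrete first-order (Gauss-Markov) realization of the longitudinal
    Dryden gust spectrum: white noise shaped so the output has variance
    sigma**2 and correlation time scale_length/airspeed. All parameter
    values are illustrative."""
    a = math.exp(-airspeed * dt / scale_length)   # per-step correlation
    b = sigma * math.sqrt(1.0 - a * a)            # keeps variance at sigma**2
    gust, x = [], 0.0
    for _ in range(n_samples):
        x = a * x + b * random.gauss(0.0, 1.0)
        gust.append(x)
    return gust

u = dryden_longitudinal(100_000)   # one gust-velocity time series
```

The anisotropy argued for in the abstract amounts to giving each velocity component its own intensity and scale length (and, more generally, cross-correlations) instead of reusing one isotropic pair of parameters for all three axes.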
Post-processing of Monte Carlo simulations for rapid BNCT source optimization studies
Bleuel, D.L.; Chu, W.T.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.
2000-10-01
A great advantage of some neutron sources, such as accelerator-produced sources, is that they can be tuned to produce different spectra. Unfortunately, optimization studies are often time-consuming and difficult, as they require a lengthy Monte Carlo simulation for each source. When multiple characteristics, such as the energy, angle, and spatial distribution of a neutron beam, are allowed to vary, an overwhelming number of simulations may be required. Many optimization studies, therefore, suffer from a small number of data points, restrictive treatment conditions, or poor statistics. By scoring pertinent information from every particle tally in a Monte Carlo simulation, then applying appropriate source-variable weight factors in a post-processing algorithm, a single simulation can be used to model any number of sources. Through this method, the response to a new source can be modeled in minutes or seconds, rather than hours or days, allowing for the analysis of truly variable source conditions at much greater resolution than is normally possible when a new simulation must be run for each data point in a study. This method has been benchmarked and used to recreate optimization studies in a small fraction of the time spent on the original studies.
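The post-processing idea can be sketched compactly: tally the response separately for each source bin in one run, then recover the response to any candidate source as a weighted combination of the stored per-bin tallies. The Python sketch below uses hypothetical random numbers in place of real Monte Carlo tallies; only the reweighting arithmetic is the point.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-bin tallies from ONE Monte Carlo run: dose_per_bin[i, j]
# is the response at depth j per source particle emitted in energy bin i.
n_bins, n_depths = 8, 5
dose_per_bin = rng.random((n_bins, n_depths))

def response_for_source(dose_per_bin, spectrum):
    """Post-process stored per-bin tallies: the response to any source
    spectrum is a weighted sum over the bins, so each new candidate source
    costs one small matrix-vector product instead of a fresh simulation."""
    w = np.asarray(spectrum, dtype=float)
    w = w / w.sum()                       # normalize the source weighting
    return w @ dose_per_bin

flat_resp = response_for_source(dose_per_bin, np.ones(n_bins))
mono = np.zeros(n_bins)
mono[-1] = 1.0                            # monoenergetic source, top bin
mono_resp = response_for_source(dose_per_bin, mono)
```

Scanning thousands of candidate spectra then reduces to thousands of such products over the single stored tally array, which is why the post-processing approach turns an hours-per-point optimization into a seconds-per-point one.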