Improved Collision Modeling for Direct Simulation Monte Carlo Methods
2011-03-01
number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use Navier... Limits on Mathematical Models [4] ... Kn = 0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic... method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for these rarefied-atmosphere hypersonic
Kinetic plasma modeling with quiet Monte Carlo direct simulation.
Albright, B. J.; Jones, M. E.; Lemons, D. S.; Winske, D.
2001-01-01
The modeling of collisions among particles in space plasma media poses a challenge for computer simulation. Traditional plasma methods are able to model well the extremes of highly collisional plasmas (MHD and Hall-MHD simulations) and collisionless plasmas (particle-in-cell simulations). However, neither is capable of treating the intermediate, semi-collisional regime. The authors have invented a new approach to particle simulation called Quiet Monte Carlo Direct Simulation (QMCDS) that can, in principle, treat plasmas with arbitrary and arbitrarily varying collisionality. The QMCDS method will be described, and applications of the QMCDS method as 'proof of principle' to diffusion, hydrodynamics, and radiation transport will be presented. Of particular interest to the space plasma simulation community is the application of QMCDS to kinetic plasma modeling. A method for QMCDS simulation of kinetic plasmas will be outlined, and preliminary results of simulations in the limit of weak pitch-angle scattering will be presented.
Drag coefficient modeling for GRACE using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; McLaughlin, Craig A.; Sutton, Eric K.
2013-12-01
Drag coefficient is a major source of uncertainty in predicting the orbit of a satellite in low Earth orbit (LEO). Computational methods like Test Particle Monte Carlo (TPMC) and Direct Simulation Monte Carlo (DSMC) are important tools in accurately computing physical drag coefficients. However, these methods are computationally expensive and cannot be employed in real time. Therefore, modeling of the physical drag coefficient is required. This work presents a technique for developing parameterized drag coefficient models using the DSMC method. The technique is validated by developing a model for the Gravity Recovery and Climate Experiment (GRACE) satellite. Results show that drag coefficients computed using the developed model for GRACE agree to within 1% with those computed using DSMC.
Monte Carlo simulation of photon scattering in biological tissue models.
Kumar, D; Chacko, S; Singh, M
1999-10-01
Monte Carlo simulation of photon scattering, with and without abnormal tissue placed at various locations in rectangular, semi-circular and semi-elliptical tissue models, has been carried out. The absorption coefficient of the tissue considered as abnormal is high and its scattering coefficient low compared to that of the control tissue. The placement of the abnormality at various locations within the models affects the transmission and surface emission of photons at various locations. The scattered photons originating from deeper layers make the maximum contribution at farther distances from the beam entry point. The contribution of various layers to photon scattering provides valuable data on the variability of internal composition.
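The photon random walk underlying simulations like this one can be sketched in a few lines. This is a minimal, hypothetical 1D-slab version (isotropic scattering, no refractive-index mismatch, arbitrary coefficients), not the authors' tissue-model code:

```python
import math
import random

def simulate_photon(mu_a, mu_s, slab_depth, rng):
    """Random-walk one photon through a slab of thickness slab_depth (mm).

    mu_a, mu_s are absorption/scattering coefficients (mm^-1). Returns
    'transmitted', 'reflected', or 'absorbed'. Scattering is isotropic.
    """
    mu_t = mu_a + mu_s
    z, cos_theta = 0.0, 1.0                        # start at surface, heading in
    while True:
        step = -math.log(rng.random()) / mu_t      # free path ~ Exp(mu_t)
        z += step * cos_theta
        if z >= slab_depth:
            return "transmitted"
        if z <= 0.0:
            return "reflected"
        if rng.random() < mu_a / mu_t:             # absorb vs scatter
            return "absorbed"
        cos_theta = 2.0 * rng.random() - 1.0       # isotropic new direction

def transmittance(mu_a, mu_s, slab_depth, n_photons=20000, seed=1):
    rng = random.Random(seed)
    hits = sum(simulate_photon(mu_a, mu_s, slab_depth, rng) == "transmitted"
               for _ in range(n_photons))
    return hits / n_photons

t_low = transmittance(0.01, 10.0, 1.0)   # control tissue: low absorption
t_high = transmittance(1.0, 10.0, 1.0)   # 'abnormal' tissue: high absorption
```

As in the abstract, raising the absorption coefficient of a region reduces the transmitted photon count.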
Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models
NASA Astrophysics Data System (ADS)
Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti
2016-10-01
A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.
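The MCMC sampling step referred to above can be illustrated with a random-walk Metropolis sampler on a toy posterior. This is a generic sketch, not the MDPHMM sampler of the study; the data, prior, and step size are all hypothetical:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: draws from a density known up to a constant."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: Normal mean with known unit variance and a flat prior.
data = [1.8, 2.2, 1.9, 2.1, 2.0]

def log_post(mu):
    return -0.5 * sum((d - mu) ** 2 for d in data)

chain = metropolis(log_post, x0=0.0, n_samples=5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

For an HMM the same accept/reject machinery would run over transition probabilities and emission parameters rather than a single mean.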
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Optimising muscle parameters in musculoskeletal models using Monte Carlo simulation.
Reed, Erik B; Hanson, Andrea M; Cavanagh, Peter R
2015-01-01
The use of musculoskeletal simulation software has become a useful tool for modelling joint and muscle forces during human activity, including in reduced gravity because direct experimentation is difficult. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler™ (San Clemente, CA, USA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces but no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. The rectus femoris was predicted to peak at 60.1% activation in the same test case compared to 19.2% activation using default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Hopping electron model with geometrical frustration: kinetic Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Terao, Takamichi
2016-09-01
The hopping electron model on the Kagome lattice was investigated by kinetic Monte Carlo simulations, and the non-equilibrium nature of the system was studied. We have numerically confirmed that aging phenomena are present in the autocorrelation function C(t, t_W) of the electron system on the Kagome lattice, which is a geometrically frustrated lattice without any disorder. The waiting-time distribution p(τ) of hopping electrons in the system on the Kagome lattice has also been studied. It is confirmed that the profile of p(τ) obtained at lower temperatures obeys a power-law behavior, which is a characteristic feature of the continuous-time random walk of electrons. These features were also compared with the characteristics of the Coulomb glass model, used as a model of disordered thin films and doped semiconductors. This work represents an advance in the understanding of the dynamics of geometrically frustrated systems and will serve as a basis for further studies of these physical systems.
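The kinetic Monte Carlo machinery behind such hopping simulations, event selection proportional to rate plus exponentially distributed waiting times, can be sketched as follows. The two hop events and their rates are hypothetical, not the Kagome-lattice model itself:

```python
import math
import random

def kmc_waiting_times(rates, n_events, seed=0):
    """Kinetic Monte Carlo: pick each event with probability proportional
    to its rate, and advance time by an Exp(R_total) waiting time
    (the standard rejection-free KMC update)."""
    rng = random.Random(seed)
    labels = list(rates)
    total = sum(rates.values())
    t, history = 0.0, []
    for _ in range(n_events):
        dt = -math.log(rng.random()) / total        # waiting time
        r, acc = rng.random() * total, 0.0
        chosen = labels[-1]                         # fallback for fp round-off
        for lab in labels:                          # pick event by its rate
            acc += rates[lab]
            if r < acc:
                chosen = lab
                break
        t += dt
        history.append((t, chosen, dt))
    return history

hist = kmc_waiting_times({"hop_left": 1.0, "hop_right": 3.0}, n_events=10000)
mean_wait = sum(dt for _, _, dt in hist) / len(hist)   # expected ~ 1/(1+3)
```

Recording the per-event waiting times, as above, is exactly what yields the distribution p(τ) studied in the abstract.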
Modeling low-coherence enhanced backscattering using Monte Carlo simulation.
Subramanian, Hariharan; Pradhan, Prabhakar; Kim, Young L; Liu, Yang; Li, Xu; Backman, Vadim
2006-08-20
Constructive interference between coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light λ to the transport mean free path l_s* of a random medium. In biological media a large l_s* (approximately 0.5-2 mm) > λ results in an extremely small (approximately 0.001 degrees) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length L_sc < l_s*) was demonstrated. Low spatial coherence behaves as a spatial filter rejecting longer path lengths and thus resulting in an increase of more than 100 times in the angular width of low coherence EBS (LEBS) cones. However, a conventional diffusion approximation-based model of EBS has not been able to explain such a dramatic increase in LEBS width. We present a photon random walk model of LEBS by using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of the LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with the experimental data.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.
Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O
2015-04-06
Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which, they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry.
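The hybrid scheme described above, ballistic propagation along Gaussian-optics trajectories up to the first scattering event, can be sketched as below. The beam parameters are hypothetical, and the subsequent conventional random walk is omitted:

```python
import math
import random

def launch_until_first_scatter(mu_s, w0, z_focus, zR, n_photons=20000, seed=2):
    """Propagate ballistic photons until their first scattering event
    (depth ~ Exp(mu_s)); the transverse offset is drawn from the local
    Gaussian beam profile of 1/e^2 radius w(z). After this point a
    conventional Monte Carlo random walk would take over (not shown)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_photons):
        z = -math.log(rng.random()) / mu_s                  # first-scatter depth
        w = w0 * math.sqrt(1.0 + ((z - z_focus) / zR) ** 2)  # Gaussian beam w(z)
        sigma = w / 2.0                                      # I(r) ~ exp(-2r^2/w^2)
        r = math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))
        out.append((z, r))
    return out

# Hypothetical numbers: scattering 1/mm, 10 um waist focused 1 mm deep.
pts = launch_until_first_scatter(mu_s=1.0, w0=0.01, z_focus=1.0, zR=0.05)
near_focus = [r for z, r in pts if abs(z - 1.0) < 0.05]
far = [r for z, r in pts if abs(z - 1.0) > 0.5]
```

Photons that survive ballistically to the focal plane remain tightly confined, which is precisely the component a purely diffusive launch would miss.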
Modeling root-reinforcement with a Fiber-Bundle Model and Monte Carlo simulation
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper uses sensitivity analysis and a Fiber-Bundle Model (FBM) to examine assumptions underpinning root-reinforcement models. First, different methods for apportioning load between intact roots were investigated. Second, a Monte Carlo approach was used to simulate plants with heartroot, platero...
Monte Carlo simulation of classical spin models with chaotic billiards.
Suzuki, Hideyuki
2013-11-01
It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
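The random-number Monte Carlo baseline that the billiard dynamics is compared against is the standard Metropolis algorithm for the Ising model. A minimal sketch (ferromagnetic 2D lattice, periodic boundaries; the sizes and temperatures are hypothetical):

```python
import math
import random

def metropolis_ising(L, beta, sweeps, seed=0):
    """Single-spin-flip Metropolis sampling of the 2D Ising model
    (J = 1, periodic boundaries), started from the all-up state."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2.0 * spin[i][j] * nb              # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]
    return spin

def magnetization(spin):
    L = len(spin)
    return abs(sum(sum(row) for row in spin)) / (L * L)

m_cold = magnetization(metropolis_ising(L=16, beta=1.0, sweeps=200))  # ordered
m_hot = magnetization(metropolis_ising(L=16, beta=0.1, sweeps=200))   # disordered
```

The billiard approach replaces the `rng.random()` draws with deterministic chaotic dynamics while aiming to reproduce the same stationary distribution.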
Analytical positron range modelling in heterogeneous media for PET Monte Carlo simulation.
Lehnert, Wencke; Gregoire, Marie-Claude; Reilhac, Anthonin; Meikle, Steven R
2011-06-07
Monte Carlo simulation codes that model positron interactions along their tortuous path are expected to be accurate but are usually slow. A simpler and potentially faster approach is to model positron range from analytical annihilation density distributions. The aims of this paper were to efficiently implement and validate such a method, with the addition of medium heterogeneity representing a further challenge. The analytical positron range model was evaluated by comparing annihilation density distributions with those produced by the Monte Carlo simulator GATE and by quantitatively analysing the final reconstructed images of Monte Carlo simulated data. In addition, the influence of positronium formation on positron range and hence on the performance of Monte Carlo simulation was investigated. The results demonstrate that 1D annihilation density distributions for different isotope-media combinations can be fitted with Gaussian functions and hence be described by simple look-up-tables of fitting coefficients. Together with the method developed for simulating positron range in heterogeneous media, this allows for efficient modelling of positron range in Monte Carlo simulation. The level of agreement of the analytical model with GATE depends somewhat on the simulated scanner and the particular research task, but appears to be suitable for lower energy positron emitters, such as (18)F or (11)C. No reliable conclusion about the influence of positronium formation on positron range and simulation accuracy could be drawn.
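The idea of reducing annihilation density distributions to Gaussian fitting coefficients stored in look-up tables can be sketched as follows. The sigma values and the simple moment-based fit are illustrative assumptions, not the paper's fitted coefficients:

```python
import math
import random

def fit_gaussian_sigma(displacements):
    """Moment fit of a zero-mean Gaussian to 1D annihilation displacements."""
    return math.sqrt(sum(x * x for x in displacements) / len(displacements))

# Hypothetical per-(isotope, medium) look-up table of fitted sigmas (mm);
# real entries would come from fitting full Monte Carlo annihilation densities.
sigma_lut = {("F18", "water"): 0.23, ("C11", "water"): 0.39}

def sample_annihilation_offset(isotope, medium, rng):
    """Draw a 1D positron-range offset from the fitted Gaussian model,
    replacing a detailed track-structure simulation."""
    return rng.gauss(0.0, sigma_lut[(isotope, medium)])

rng = random.Random(3)
offsets = [sample_annihilation_offset("F18", "water", rng) for _ in range(50000)]
recovered = fit_gaussian_sigma(offsets)   # round-trip check against the table
```

In a heterogeneous phantom the look-up key would switch with the local medium along the positron's path, which is the harder case the paper addresses.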
A new Monte Carlo simulation model for laser transmission in smokescreen based on MATLAB
NASA Astrophysics Data System (ADS)
Lee, Heming; Wang, Qianqian; Shan, Bin; Li, Xiaoyang; Gong, Yong; Zhao, Jing; Peng, Zhong
2016-11-01
A new Monte Carlo simulation model of laser transmission in smokescreen is promoted in this paper. In the traditional Monte Carlo simulation model, the radius of particles is set at the same value and the initial cosine value of photons direction is fixed also, which can only get the approximate result. The new model is achieved based on MATLAB and can simulate laser transmittance in smokescreen with different sizes of particles, and the output result of the model is close to the real scenarios. In order to alleviate the influence of the laser divergence while traveling in the air, we changed the initial direction cosine of photons on the basis of the traditional Monte Carlo model. The mixed radius particle smoke simulation results agree with the measured transmittance under the same experimental conditions with 5.42% error rate.
Modeling of hysteresis loops by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.
2015-12-01
Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. In the case of any assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm, and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant on the coercive field for different temperatures and algorithms. A study of the angular acceptance move distribution allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads to better-quantified coercive field measurements resulting from physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
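The cone algorithm mentioned above restricts each trial move to a narrow cone around the current spin direction, in contrast to the free algorithm's uniform resampling of the sphere. A sketch of such a move (the perturbation amplitude `delta` is a hypothetical choice):

```python
import math
import random

def random_unit_vector(rng):
    """Uniform point on the unit sphere (the 'free' trial move)."""
    z = 2.0 * rng.random() - 1.0
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def cone_move(spin, delta, rng):
    """'Cone' trial move: add a random vector of length <= delta to the
    spin and renormalize, so the new direction stays within a cone of
    half-angle asin(delta) around the old one."""
    x, y, z = spin
    px, py, pz = random_unit_vector(rng)
    r = delta * rng.random()
    nx, ny, nz = x + r * px, y + r * py, z + r * pz
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / norm, ny / norm, nz / norm)

rng = random.Random(4)
s0 = (0.0, 0.0, 1.0)
angles = []
for _ in range(10000):
    s1 = cone_move(s0, delta=0.2, rng=rng)
    dot = max(-1.0, min(1.0, s0[0] * s1[0] + s0[1] * s1[1] + s0[2] * s1[2]))
    angles.append(math.degrees(math.acos(dot)))
max_angle = max(angles)   # bounded by asin(0.2) ~ 11.5 degrees
```

Inside a full simulation each such move would be accepted or rejected with the usual Metropolis criterion on the exchange-plus-anisotropy energy.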
Monte Carlo simulations of Landau-Ginzburg model for membranes
NASA Astrophysics Data System (ADS)
Koibuchi, Hiroshi; Shobukhov, Andrey
2014-02-01
The Landau-Ginzburg (LG) model for membranes is numerically studied on triangulated spheres in R^3. The LG model is in sharp contrast to the model of Helfrich-Polyakov (HP). The reason for this difference is that the curvature energy of the LG (HP) Hamiltonian is defined by means of the tangential (normal) vector of the surface. For this reason, the curvature energy of the LG model includes the in-plane bending or shear energy component, which is not included in the curvature energy of the HP model. From the simulation data, we find that the LG model undergoes a first-order collapse transition. The results of the LG model in the higher-dimensional spaces R^d (d > 3) and on the self-avoiding (SA) surfaces in R^3 are presented and discussed. We also study the David-Guitter (DG) model, which is a variant of the LG model, and find that the DG model undergoes a first-order transition. It is also found that the transition can be observed only on the homogeneous surfaces, which are composed of almost uniform triangles according to the condition that the induced metric ∂_a r · ∂_b r is close to δ_ab.
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
Monte Carlo Simulations of Compressible Ising Models: Do We Understand Them?
NASA Astrophysics Data System (ADS)
Landau, D. P.; Dünweg, B.; Laradji, M.; Tavazza, F.; Adler, J.; Cannavaccioulo, L.; Zhu, X.
Extensive Monte Carlo simulations have begun to shed light on our understanding of phase transitions and universality classes for compressible Ising models. A comprehensive analysis of a Landau-Ginsburg-Wilson hamiltonian for systems with elastic degrees of freedom resulted in the prediction that there should be four distinct cases that would have different behavior, depending upon symmetries and thermodynamic constraints. We shall provide an account of the results of careful Monte Carlo simulations for a simple compressible Ising model that can be suitably modified so as to replicate all four cases.
Inclusion of coherence in Monte Carlo models for simulation of x-ray phase contrast imaging.
Cipiccia, Silvia; Vittoria, Fabio A; Weikum, Maria; Olivo, Alessandro; Jaroszynski, Dino A
2014-09-22
Interest in phase contrast imaging methods based on electromagnetic wave coherence has increased significantly recently, particularly at X-ray energies. This is giving rise to a demand for effective simulation methods. Coherent imaging approaches are usually based on wave optics, which require significant computational resources, particularly for producing 2D images. Monte Carlo (MC) methods, used to track individual particles/photons in particle physics, are not usually considered appropriate for describing coherence effects. Previous preliminary work has evaluated the possibility of incorporating coherence in Monte Carlo codes. In this paper, we present the implementation of refraction in a model based on time-of-flight calculations and the Huygens-Fresnel principle, which allows the formation of phase contrast images to be reproduced in partially and fully coherent experimental conditions. The model is implemented in the FLUKA Monte Carlo code, and X-ray phase contrast imaging simulations are compared with experiments and wave optics calculations.
Accelerated Monte Carlo models to simulate fluorescence spectra from layered tissues.
Swartling, Johannes; Pifferi, Antonio; Enejder, Annika M K; Andersson-Engels, Stefan
2003-04-01
Two efficient Monte Carlo models are described, facilitating predictions of complete time-resolved fluorescence spectra from a light-scattering and light-absorbing medium. These are compared with a third, conventional fluorescence Monte Carlo model in terms of accuracy, signal-to-noise statistics, and simulation time. The improved computation efficiency is achieved by means of a convolution technique, justified by the symmetry of the problem. Furthermore, the reciprocity principle for photon paths, employed in one of the accelerated models, is shown to simplify the computations of the distribution of the emitted fluorescence drastically. A so-called white Monte Carlo approach is finally suggested for efficient simulations of one excitation wavelength combined with a wide range of emission wavelengths. The fluorescence is simulated in a purely scattering medium, and the absorption properties are instead taken into account analytically afterward. This approach is applicable to the conventional model as well as to the two accelerated models. Essentially the same absolute values for the fluorescence integrated over the emitting surface and time are obtained for the three models within the accuracy of the simulations. The time-resolved and spatially resolved fluorescence exhibits a slight overestimation at short delay times close to the source corresponding to approximately two grid elements for the accelerated models, as a result of the discretization and the convolution. The improved efficiency is most prominent for the reverse-emission accelerated model, for which the simulation time can be reduced by up to two orders of magnitude.
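The "white Monte Carlo" idea, simulate once in a purely scattering medium and fold in absorption analytically afterwards via Beer-Lambert weighting of the recorded path lengths, can be sketched as follows (1D slab toy geometry, not the authors' fluorescence code; the last step is not clipped at the boundary, a simplification):

```python
import math
import random

def scattering_only_paths(mu_s, slab, n_photons=20000, seed=5):
    """'White' Monte Carlo forward pass: isotropic random walk in a purely
    scattering slab (mu_a = 0), recording the total path length of every
    photon that exits through the far side."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_photons):
        z, cos_t, total = 0.0, 1.0, 0.0
        while True:
            step = -math.log(rng.random()) / mu_s
            total += step
            z += step * cos_t
            if z >= slab:
                paths.append(total)
                break
            if z < 0.0:
                break                               # escaped backwards
            cos_t = 2.0 * rng.random() - 1.0        # isotropic scatter
    return paths, n_photons

def weighted_transmittance(mu_a, paths, n_total):
    """Apply absorption analytically afterward: Beer-Lambert weight per path."""
    return sum(math.exp(-mu_a * L) for L in paths) / n_total

paths, n = scattering_only_paths(mu_s=5.0, slab=1.0)
t0 = weighted_transmittance(0.0, paths, n)   # no absorption
t1 = weighted_transmittance(1.0, paths, n)   # re-weighted, no re-simulation
```

One stored simulation thus serves an arbitrary range of absorption coefficients, which is what makes the approach efficient across emission wavelengths.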
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models
NASA Astrophysics Data System (ADS)
Si, Lisha; Liao, Xiaoyun; Zhou, Nengji
2016-12-01
With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5) , ν = 1.32(3) , z = 1.12(1) , and ζ = 0.90(1) , quite different from that of the quenched Edwards-Wilkinson equation.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (approximately 200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
NASA Astrophysics Data System (ADS)
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computation models for complex structures, a stochastic model updating (SMU) strategy was proposed by combining the improved response surface model (IRSM) and an advanced Monte Carlo (MC) method based on experimental static tests, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with an emphasis on the moving least-squares method, and the advanced MC simulation method was studied based on the Latin hypercube sampling method as well. The SMU procedure was then presented with experimental static tests for complex structures. The SMUs of a simply-supported beam and an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMUs of complex structural systems; (2) the IRSM is demonstrated to be an effective model because its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; (3) the advanced MC method observably decreases the number of samples required from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
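The combination of basic building blocks, decorators, and random position generation described above can be sketched with rejection sampling. The components below (a uniform sphere and an exponential-falloff decorator) are hypothetical stand-ins, not SKIRT's actual classes:

```python
import math
import random

class UniformSphere:
    """Basic building block: constant density inside the unit sphere."""
    def density(self, x, y, z):
        return 1.0 if x * x + y * y + z * z <= 1.0 else 0.0

class Decorator:
    """Wraps another component, mirroring the decorator-based design."""
    def __init__(self, inner):
        self.inner = inner

class RadialFalloff(Decorator):
    """Hypothetical decorator: multiplies the wrapped density by exp(-2r),
    analogous to decorators that add structure to a building block."""
    def density(self, x, y, z):
        r = math.sqrt(x * x + y * y + z * z)
        return self.inner.density(x, y, z) * math.exp(-2.0 * r)

def random_positions(model, n, bound=1.0, dmax=1.0, seed=6):
    """Rejection sampling of positions from any density bounded by dmax
    inside a cube of half-side `bound` (a generic black-box generator)."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y, z = (rng.uniform(-bound, bound) for _ in range(3))
        if rng.random() * dmax <= model.density(x, y, z):
            pts.append((x, y, z))
    return pts

model = RadialFalloff(UniformSphere())          # decorators chain freely
pts = random_positions(model, 2000)
mean_r = sum(math.sqrt(x * x + y * y + z * z) for x, y, z in pts) / len(pts)
```

The paper's point is that customized per-component generators beat this generic rejection loop, but the decorator chaining shown here is the structural idea.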
Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave
NASA Astrophysics Data System (ADS)
Yasuda, Shugo
2017-02-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to calculate the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of the bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, which is defined by the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behaviors for small Knudsen numbers is numerically verified.
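The kinetic ingredient of such a simulation can be reduced to a one-dimensional biased random walk for illustration: the tumble rate is lowered when a walker moves up a chemoattractant gradient, producing a net drift. All parameter values below (speed, rates, sensitivity) are illustrative assumptions, not the paper's coupled Monte Carlo/finite-volume scheme:

```python
import random

def run_and_tumble(steps, dt, speed, base_rate, sensitivity, rng):
    """1D run-and-tumble walker: the tumble rate drops when the walker
    moves up a linear chemoattractant gradient (towards +x)."""
    x, direction = 0.0, 1
    for _ in range(steps):
        x += direction * speed * dt
        # moving up-gradient (direction = +1) suppresses tumbling
        rate = max(base_rate * (1.0 - sensitivity * direction), 0.0)
        if rng.random() < rate * dt:          # tumble: pick a new direction
            direction = rng.choice((-1, 1))
    return x

rng = random.Random(0)
finals = [run_and_tumble(2000, 0.01, 1.0, 1.0, 0.5, rng) for _ in range(200)]
mean_x = sum(finals) / len(finals)   # net chemotactic drift towards +x
```

In the full method each walker's tumble rate would instead respond to the chemoattractant field computed by the finite volume solver.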
NASA Astrophysics Data System (ADS)
Flicstein, Jean; Pata, S.; Chun, L. S. H. K.; Palmier, Jean F.; Courant, J. L.
1998-05-01
A model for ultraviolet-induced chemical vapor deposition (UV CVD) of a-SiN:H is described. In the simulation of the UV CVD process, the creation of activated charged centers, species incorporation, surface diffusion, and desorption are considered as elementary steps of the photonucleation and photodeposition mechanisms. The process is characterized by two surface sticking coefficients. Surface diffusion of species is modeled with a Gaussian distribution. A real-time Monte Carlo method is used to determine photonucleation and photodeposition rates in nanostructures. Comparison of experimental and simulation results for a-SiN:H shows that the model predicts the temporal evolution of the morphology under operating conditions down to atomistic resolution.
Multicanonical Monte Carlo simulations of anisotropic SU(3) and SU(4) Heisenberg models
NASA Astrophysics Data System (ADS)
Harada, Kenji; Kawashima, Naoki; Troyer, Matthias
2009-03-01
We present the results of multicanonical Monte Carlo simulations on two-dimensional anisotropic SU(3) and SU(4) Heisenberg models. In our previous study [K. Harada, et al., J. Phys. Soc. Jpn. 76, 013703 (2007)], we found evidence for a direct quantum phase transition from the valence-bond-solid (VBS) phase to the SU(3) symmetry breaking phase on the SU(3) model and we proposed the possibility of deconfined critical phenomena (DCP) [T. Senthil, et al., Science 303, 1490 (2004); T. Grover and T. Senthil, Phys. Rev. Lett. 98, 247202 (2007)]. Here we will present new results with an improved algorithm, using a multicanonical Monte Carlo algorithm. Using a flow method-like technique [A.B. Kuklov, et al., Annals of Physics 321, 1602 (2006)], we discuss the possibility of DCP in both models.
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
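For reference, the VHS total cross section used in such calibrations falls off with relative speed as a power law set by the viscosity index ω. The sketch below implements Bird's standard VHS form with illustrative N2-like reference parameters (the diameter, ω, and reference temperature are assumptions, not the paper's calibrated values):

```python
import math

def vhs_cross_section(g, d_ref, omega, t_ref, m_r, k_b=1.380649e-23):
    """Bird's VHS total cross section as a function of relative speed g:
    sigma = pi d_ref^2 (2 k T_ref / (m_r g^2))^(omega - 1/2) / Gamma(5/2 - omega)."""
    sigma_ref = math.pi * d_ref * d_ref
    ratio = 2.0 * k_b * t_ref / (m_r * g * g)
    return sigma_ref * ratio ** (omega - 0.5) / math.gamma(2.5 - omega)

# Illustrative N2-N2 parameters (assumed, not from the paper)
m_r = 0.5 * 4.65e-26                 # reduced mass of an N2-N2 pair, kg
sigma1 = vhs_cross_section(1000.0, 4.17e-10, 0.74, 273.0, m_r)
sigma2 = vhs_cross_section(2000.0, 4.17e-10, 0.74, 273.0, m_r)
```

Because ω > 1/2, the cross section decreases with relative speed, which is the behaviour the calibration tunes (together with the VSS deflection exponent) to match the ab initio collision integrals.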
Monte Carlo simulation of Prussian blue analogs described by Heisenberg ternary alloy model
NASA Astrophysics Data System (ADS)
Yüksel, Yusuf
2015-11-01
Within the framework of the Monte Carlo simulation technique, we simulate the magnetic behavior of Prussian blue analogs based on a Heisenberg ternary alloy model. We present phase diagrams in various parameter spaces, and we compare some of our results with those based on Ising counterparts. We clarify the variations of the transition temperature and the compensation phenomenon with the mixing ratio of magnetic ions, exchange interactions, and exchange anisotropy in the present ferro-ferrimagnetic Heisenberg system. According to our results, thermal variation of the total magnetization curves may exhibit N, L, P, Q, R type behaviors based on the Néel classification scheme.
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
Modeling of near-continuum flows using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Lohn, P. D.; Haflinger, D. E.; McGregor, R. D.; Behrens, H. W.
1990-06-01
The direct simulation Monte Carlo (DSMC) method is used to model the flow of a hypersonic stream about a wedge. The Knudsen number of 0.00075 puts the flow into the continuum category and hence is a challenge for the DSMC method. The modeled flowfield is shown to agree extremely well with the experimental measurements in the wedge wake taken by Batt (1967). This experimental confirmation serves as a rigorous validation of the DSMC method and provides guidelines for computations of near-continuum flows.
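The Knudsen-number regime classification referred to above can be made concrete with a short sketch. The hard-sphere mean free path formula and the conventional 0.01/0.1/10 regime thresholds are textbook rules of thumb, not values taken from this paper:

```python
import math

def mean_free_path(n, d):
    """Hard-sphere mean free path: lambda = 1 / (sqrt(2) * pi * d^2 * n)."""
    return 1.0 / (math.sqrt(2.0) * math.pi * d * d * n)

def flow_regime(kn):
    """Conventional rule-of-thumb Knudsen-number regime boundaries."""
    if kn < 0.01:
        return "continuum"
    if kn < 0.1:
        return "slip"
    if kn < 10.0:
        return "transition"
    return "free molecular"

# Sea-level-like air (number density and molecular diameter are illustrative)
lam = mean_free_path(2.5e25, 3.7e-10)
print(flow_regime(0.00075))   # the wedge flow above -> continuum
```

A Kn of 0.00075 sits well below the 0.01 continuum boundary, which is why the wedge flow is a stress test for DSMC rather than a natural application.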
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
NASA Astrophysics Data System (ADS)
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact and jointed rock properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
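The core sampling step of such a Kuz-Ram Monte Carlo simulator can be sketched as inverse-transform sampling of the Rosin-Rammler size distribution, with the uncertain inputs redrawn on each trial. The median-size and uniformity-index ranges below are purely illustrative, not calibrated quarry values:

```python
import math
import random

def rosin_rammler_sample(x50, n, rng):
    """Inverse-transform draw from the Rosin-Rammler distribution used by
    Kuz-Ram: P(x) = 1 - exp(-(x/x_c)^n), with x_c set so x50 is the median."""
    x_c = x50 / (math.log(2.0) ** (1.0 / n))
    u = rng.random()
    return x_c * (-math.log(1.0 - u)) ** (1.0 / n)

rng = random.Random(1)
sizes = []
for _ in range(5000):
    x50 = rng.uniform(0.20, 0.30)   # median fragment size, metres (assumed)
    n = rng.uniform(1.2, 1.8)       # uniformity index (assumed)
    sizes.append(rosin_rammler_sample(x50, n, rng))
frac_oversize = sum(s > 0.5 for s in sizes) / len(sizes)
```

In a full simulator, x50 would come from the Kuznetsov equation given rock factor, charge mass and powder factor, and n from the drilling-pattern geometry; the Monte Carlo loop then turns uncertainty in those inputs into a distribution of fragmentation curves.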
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). Under the free boundary condition, MCBC correctly predicted the peak of the highly charged ion state outputs; under the Bohm condition it predicted a similar charge state distribution width but a lower peak charge state. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.
AO modelling for wide-field E-ELT instrumentation using Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Basden, Alastair; Morris, Simon; Morris, Tim; Myers, Richard
2014-08-01
Extensive simulations of AO performance for several E-ELT instruments (including EAGLE, MOSAIC, HIRES and MAORY) have been ongoing using the Monte-Carlo Durham AO Simulation Package. We present the latest simulation results, including studies into DM requirements, dependencies of performance on asterism, detailed point spread function generation, accurate telescope modelling, and studies of laser guide star effects. Details of simulations will be given, including the use of optical models of the E-ELT to generate wavefront sensor pupil illumination functions, laser guide star modelling, and investigations of different many-layer atmospheric profiles. We discuss issues related to ELT-scale simulation, how we have overcome these, and how we will be approaching forthcoming issues such as modelling of advanced wavefront control, multi-rate wavefront sensing, and advanced treatment of extended laser guide star spots. We also present progress made on integrating simulation with AO real-time control systems. The impact of simulation outcomes on instrument design studies will be discussed, and the ongoing work plan presented.
Fast Off-Lattice Monte Carlo Simulations with a Novel Soft-Core Spherocylinder Model
NASA Astrophysics Data System (ADS)
Zong, Jing; Zhang, Xinghua; Wang, Qiang (David)
2011-03-01
Fast off-lattice Monte Carlo simulations with soft-core repulsive potentials that allow particle overlapping give orders of magnitude faster/better sampling of the configurational space than conventional molecular simulations with hard-core repulsions (such as in the Lennard-Jones potential). Here we present our fast off-lattice Monte Carlo simulations on the structures and phase transitions of liquid crystals and rod-coil diblock copolymers based on a novel and computationally efficient anisotropic soft-core potential that gives exact treatment of the excluded-volume interactions between two spherocylinders (thus the orientational interaction between them favoring their parallel alignment). Our model further takes into account the degree of overlap of two spherocylinders, thus superior to other soft-core models that depend only on their shortest distance. It has great potential applications in the study of liquid crystals, block copolymers containing rod blocks, and liquid crystalline polymers. Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).
Fission yield calculation using toy model based on Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jubaidah, Kurniadi, Rizal
2015-09-01
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the mean of the left curve (μL) and the mean of the right curve (μR), and the deviation of the left curve (σL) and the deviation of the right curve (σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully shows the same tendency as experimental results, where the average light fission yield is in the range of 90
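A minimal sketch of the two-Gaussian sampling idea, assuming illustrative parameter values (a U-236-like mass number with μL ≈ 96 and σL ≈ 6, which are not the paper's fitted parameters) and using mass conservation to pair each light fragment with its heavy partner:

```python
import random

def toy_fission_event(a_parent, mu_l, sigma_l, rng):
    """One fission event in the toy-model spirit: the light-fragment mass
    is drawn from a Gaussian, and the heavy partner follows from
    A_L + A_H = A (mass conservation)."""
    a_light = rng.gauss(mu_l, sigma_l)
    return a_light, a_parent - a_light

rng = random.Random(7)
events = [toy_fission_event(236.0, 96.0, 6.0, rng) for _ in range(4000)]
mean_light = sum(min(e) for e in events) / len(events)
mean_heavy = sum(max(e) for e in events) / len(events)
```

Varying μ and σ in this loop shifts the location and spread of the asymmetric yield peaks, which is the sensitivity the abstract reports.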
A geometrical model for the Monte Carlo simulation of the TrueBeam linac.
Rodriguez, M; Sempau, J; Fogliata, A; Cozzi, L; Sauerwein, W; Brualla, L
2015-06-07
Monte Carlo simulation of linear accelerators (linacs) depends on the accurate geometrical description of the linac head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files of the flattening-filter-free (FFF) beams tallied at a plane located just upstream of the jaws. Yet, Monte Carlo simulations based on third-party tallied phase spaces are subject to limitations. In this work, an experimentally based geometry developed for the simulation of the FFF beams of the Varian TrueBeam linac is presented. The Monte Carlo geometrical model of the TrueBeam linac uses information provided by Varian that reveals large similarities between the TrueBeam machine and the Clinac 2100 downstream of the jaws. Thus, the upper part of the TrueBeam linac was modeled by introducing modifications to the Varian Clinac 2100 linac geometry. The most important of these modifications is the replacement of the standard flattening filters by ad hoc thin filters. These filters were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements performed with a diode detector for radiation fields ranging from 3 × 3 to 40 × 40 cm² at depths of maximum dose of 5 and 10 cm. Indicators of agreement between the experimental data and the simulation results obtained with the proposed geometrical model were the dose differences, the root-mean-square error and the gamma index. The same comparisons were performed for dose profiles obtained from Monte Carlo simulations using the phase-space files distributed by Varian for the TrueBeam linac as the sources of particles. Results of comparisons show a good agreement of the dose for the ansatz geometry similar to that obtained for the simulations with the TrueBeam phase-space files for all fields and depths considered, except for
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Wüst, Thomas; Landau, David P.
2011-09-01
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
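For context, the HP model itself is simple to state: on a square lattice, the energy is -1 for every pair of hydrophobic (H) monomers that are lattice neighbours but not bonded along the chain. A minimal sketch of that energy function follows (the four-monomer sequence is a hypothetical example, and this is only the energy evaluation, not the Wang-Landau sampler):

```python
def hp_energy(coords, sequence):
    """HP-model energy on the square lattice: -1 for every pair of H
    monomers that are lattice neighbours but not adjacent in the chain."""
    energy = 0
    n = len(coords)
    for i in range(n):
        for j in range(i + 2, n):   # j >= i+2 skips chain-bonded pairs
            if sequence[i] == "H" == sequence[j]:
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:    # Manhattan distance 1: lattice contact
                    energy -= 1
    return energy

# A square conformation of the hypothetical sequence HPPH:
# the two H ends are lattice neighbours, giving one H-H contact
print(hp_energy([(0, 0), (1, 0), (1, 1), (0, 1)], "HPPH"))   # -> -1
```

Wang-Landau sampling then estimates the density of states over this energy landscape using trial moves such as the pull and bond-rebridging moves mentioned above.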
Macroion solutions in the cell model studied by field theory and Monte Carlo simulations.
Lue, Leo; Linse, Per
2011-12-14
Aqueous solutions of charged spherical macroions with variable dielectric permittivity and their associated counterions are examined within the cell model using a field theory and Monte Carlo simulations. The field theory is based on separation of fields into short- and long-wavelength terms, which are subjected to different statistical-mechanical treatments. The simulations were performed by using a new, accurate, and fast algorithm for numerical evaluation of the electrostatic polarization interaction. The field theory provides counterion distributions outside a macroion in good agreement with the simulation results over the full range from weak to strong electrostatic coupling. A low-dielectric macroion leads to a displacement of the counterions away from the macroion.
Surface-subsurface model for a dimer-dimer catalytic reaction: a Monte Carlo simulation study
NASA Astrophysics Data System (ADS)
Khan, K. M.; Albano, E. V.
2002-02-01
The surface-subsurface model for a dimer-dimer reaction of the type A2 + 2B2→2AB2 has been studied through Monte Carlo simulation via a model based on the lattice gas non-thermal Langmuir-Hinshelwood mechanism, which involves the precursor motion of the B2 molecule. The motion of precursors is considered on the surface as well as in the subsurface. The most interesting feature of this model is that it yields a steady reactive window bounded by continuous and discontinuous irreversible phase transitions. The phase diagram is qualitatively similar to the well known Ziff, Gulari and Barshad (ZGB) model. The width of the window depends upon the mobility of precursors. The continuous transition disappears when the mobility of the surface precursors is extended to the third-nearest neighbourhood. The dependence of the production rate on the partial pressure of the B2 dimer is predicted by simple mathematical equations in our model.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) is developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
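The RSM-plus-MCS step can be sketched as follows: once a polynomial surrogate has been fitted, propagating parameter uncertainty reduces to cheap random sampling through it. The polynomial coefficients and Gaussian inputs below are illustrative stand-ins (a low-order surrogate rather than the paper's incomplete fourth-order RSM):

```python
import random

def response_surface(x1, x2):
    """Illustrative low-order polynomial surrogate standing in for the
    fitted RSM (coefficients are assumptions, not the paper's)."""
    return 1.0 + 0.5 * x1 - 0.2 * x2 + 0.1 * x1 * x2 + 0.05 * x1 ** 2

rng = random.Random(3)
# Rapid Monte Carlo sampling through the cheap surrogate
samples = [response_surface(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

In the full method, these sampled statistics feed the weighted mean/covariance objective function that the hybrid optimizer minimizes against test data.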
NASA Astrophysics Data System (ADS)
Hobler, Gerhard; Bradley, R. Mark; Urbassek, Herbert M.
2016-05-01
Sigmund's model of spatially resolved sputtering is the underpinning of many models of nanoscale pattern formation induced by ion bombardment. It is based on three assumptions: (i) the number of sputtered atoms is proportional to the nuclear energy deposition (NED) near the surface, (ii) the NED distribution is independent of the orientation and shape of the solid surface and is identical to the one in an infinite medium, and (iii) the NED distribution in an infinite medium can be approximated by a Gaussian. We test the validity of these assumptions using Monte Carlo simulations of He, Ar, and Xe impacts on Si at energies of 2, 20, and 200 keV with incidence angles from perpendicular to grazing. We find that for the more commonly-employed beam parameters (Ar and Xe ions at 2 and 20 keV and nongrazing incidence), the Sigmund model's predictions are within a factor of 2 of the Monte Carlo results for the total sputter yield and the first two moments of the spatially resolved sputter yield. This is partly due to a compensation of errors introduced by assumptions (i) and (ii). The Sigmund model, however, does not describe the skewness of the spatially resolved sputter yield, which is almost always significant. The approximation is much poorer for He ions and/or high energies (200 keV). All three of Sigmund's assumptions break down at grazing incidence angles. In all cases, we discuss the origin of the deviations from Sigmund's model.
Iterative optimisation of Monte Carlo detector models using measurements and simulations
NASA Astrophysics Data System (ADS)
Marzocchi, O.; Leone, D.
2015-04-01
This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: the acquisition in the laboratory of measurement data to be used as a reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the trial-and-error approach typical of the prior techniques.
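The iterative steps can be sketched as a finite-difference linear system solved to update the model parameters. The stand-in `simulate` function below plays the role of the Monte Carlo detector run, and its two parameters and linear response are assumptions for illustration only:

```python
def solve2(a, b):
    """Solve the 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return ((b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det)

def simulate(p):
    """Stand-in for the Monte Carlo detector run: maps two model
    parameters to two observable quantities (linear, for illustration)."""
    t, d = p
    return (2.0 * t - 0.5 * d, 0.3 * t + 1.5 * d)

measured = simulate((1.2, 0.4))   # laboratory reference data (step 1)
p = [1.0, 1.0]                    # tentative detector model (step 2)
for _ in range(3):                # steps 3-4, iterated
    y0 = simulate(p)
    h = 1e-4
    # finite-difference coefficients of the linear equations (step 3)
    cols = []
    for k in range(2):
        q = list(p)
        q[k] += h
        yk = simulate(q)
        cols.append(((yk[0] - y0[0]) / h, (yk[1] - y0[1]) / h))
    jac = [[cols[0][0], cols[1][0]],
           [cols[0][1], cols[1][1]]]
    # solve and update the model (step 4)
    dx = solve2(jac, (measured[0] - y0[0], measured[1] - y0[1]))
    p = [p[0] + dx[0], p[1] + dx[1]]
```

Because each "simulation" is just a function call, the loop is exactly the kind of step a scripting language can automate; with a real Monte Carlo code, each call would be a full detector simulation.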
NASA Astrophysics Data System (ADS)
Wysong, Ingrid; Gimelshein, Sergey; Gimelshein, Natalia; McKeon, William; Esposito, Fabrizio
2012-04-01
The quantum kinetic chemical reaction model proposed by Bird for the direct simulation Monte Carlo method is based on collision kinetics with no assumed Arrhenius-related parameters. It demonstrates an excellent agreement with the best estimates for thermal reaction rates coefficients and with two-temperature nonequilibrium rate coefficients for high-temperature air reactions. This paper investigates this model further, concentrating on the non-thermal reaction cross sections as a function of collision energy, and compares its predictions with those of the earlier total collision energy model, also by Bird, as well as with available quasi-classical trajectory cross section predictions (this paper also publishes for the first time a table of these computed reaction cross sections). A rarefied hypersonic flow over a cylinder is used to examine the sensitivity of the number of exchange reactions to the differences in the two models under a strongly nonequilibrium velocity distribution.
NASA Astrophysics Data System (ADS)
Rothfischer, Ramona; Grosenick, Dirk; Macdonald, Rainer
2015-07-01
We discuss the determination of optical properties of thick scattering media from measurements of time-resolved transmittance by diffusion theory, using Monte Carlo simulations as a gold standard to model photon migration. Our theoretical and experimental investigations reveal differences between the calculated distributions of times of flight (DTOFs) of photons from the two models, which result in an overestimation of the absorption and the reduced scattering coefficient by diffusion theory that becomes larger for small scattering coefficients. By introducing a temporal shift in the DTOFs obtained with the diffusion model as an additional fit parameter, the deviation in the absorption coefficient can be compensated almost completely. If the scattering medium is additionally covered by transparent layers (e.g. glass plates), the deviation between the DTOFs from the two models is even larger, which mainly affects the determination of the reduced scattering coefficient by diffusion theory. A temporal shift improves the accuracy of the optical properties derived by diffusion theory in this case as well.
A Monte Carlo Radiation Model for Simulating Rarefied Multiphase Plume Flows
2005-05-01
University of Michigan, Ann Arbor, MI 48109. A Monte Carlo ray trace radiation model is presented for the determination of radiative
Yuan, Jiankui; Zheng, Yiran; Wessels, Barry; Lo, Simon S; Ellis, Rodney; Machtay, Mitchell; Yao, Min
2016-12-01
A virtual source model for Monte Carlo simulations of helical TomoTherapy has been developed previously by the authors. The purpose of this work is to perform experiments in an anthropomorphic (RANDO) phantom with the same order of complexity as in clinical treatments to validate the virtual source model to be used for quality assurance secondary check on TomoTherapy patient planning dose. Helical TomoTherapy involves a complex delivery pattern with irregular beam apertures and couch movement during irradiation. Monte Carlo simulation, as the most accurate dose algorithm, is desirable in radiation dosimetry. Current Monte Carlo simulations for helical TomoTherapy adopt the full Monte Carlo model, which includes detailed modeling of individual machine components, and thus, large phase space files are required at different scoring planes. As an alternative approach, we previously developed a virtual source model that does not require the large phase space files for patient dose calculations. In this work, we apply the simulation system to recompute the patient doses, which were generated by the treatment planning system in an anthropomorphic phantom to mimic real patient treatments. We performed thermoluminescence dosimeter point dose and film measurements to compare with Monte Carlo results. Thermoluminescence dosimeter measurements show that the relative difference between Monte Carlo and the treatment planning system is within 3%, with the largest difference less than 5% for both test plans. The film measurements demonstrated 85.7% and 98.4% passing rates using the 3 mm/3% acceptance criterion for the head and neck and lung cases, respectively. Over 95% passing rate is achieved if the 4 mm/4% criterion is applied. For the dose-volume histograms, very good agreement is obtained between the Monte Carlo and treatment planning system methods for both cases. The experimental results demonstrate that the virtual source model Monte Carlo system can be a viable option for the
Monte-Carlo simulations of a coarse-grained model for α-oligothiophenes
NASA Astrophysics Data System (ADS)
Almutairi, Amani; Luettmer-Strathmann, Jutta
The interfacial layer of an organic semiconductor in contact with a metal electrode has important effects on the performance of thin-film devices. However, the structure of this layer is not easy to model. Oligothiophenes are small, π-conjugated molecules with applications in organic electronics that also serve as small-molecule models for polythiophenes. α-hexithiophene (6T) is a six-ring molecule, whose adsorption on noble metal surfaces has been studied extensively (see, e.g., Ref.). In this work, we develop a coarse-grained model for α-oligothiophenes. We describe the molecules as linear chains of bonded, discotic particles with Gay-Berne potential interactions between non-bonded ellipsoids. We perform Monte Carlo simulations to study the structure of isolated and adsorbed molecules.
NASA Astrophysics Data System (ADS)
Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard
2016-03-01
Laser speckle imaging (LSI) enables measurement of relative blood flow in microvasculature and perfusion in tissues. To determine the impact of tissue optical properties and perfusion dynamics on speckle contrast, we developed a computational simulation of laser speckle contrast imaging. We used a discrete absorption-weighted Monte Carlo simulation to model the transport of light in tissue. We simulated optical excitation of a uniform flat light source and tracked the momentum transfer of photons as they propagated through a simulated tissue geometry. With knowledge of the probability distribution of momentum transfer occurring in various layers of the tissue, we calculated the expected laser speckle contrast arising with coherent excitation using both reflectance and transmission geometries. We simulated light transport in a single homogeneous tissue while independently varying either absorption (0.001-100 mm^-1), reduced scattering (0.1-10 mm^-1), or anisotropy (0.05-0.99) over a range of values relevant to blood and commonly imaged tissues. We observed that contrast decreased by 49% with an increase in optical scattering, and increased by 130% with an increase in absorption (exposure time = 1 ms). We also explored how speckle contrast was affected by the depth (0-1 mm) and flow speed (0-10 mm/s) of a dynamic vascular inclusion. This model of speckle contrast is important to increase our understanding of how parameters such as perfusion dynamics, vessel depth, and tissue optical properties affect laser speckle imaging.
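The link between decorrelation time (i.e., flow) and exposure-time speckle contrast can be sketched numerically. The code below is a minimal illustration, not the authors' simulation: it assumes a single-exponential field decorrelation g1(t) = exp(-t/τc) and evaluates the standard contrast integral K² = (2β/T)∫₀ᵀ(1 - t/T)|g1(t)|² dt by trapezoidal quadrature, comparing it against the known closed form.

```python
import math

def speckle_contrast(T, tau_c, beta=1.0, n=100000):
    """Numerical K for exposure time T: K^2 = (2*beta/T) * integral of
    (1 - t/T) * |g1(t)|^2 over [0, T], with g1(t) = exp(-t/tau_c)."""
    dt = T / n
    acc = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        acc += w * (1.0 - t / T) * math.exp(-2.0 * t / tau_c)
    return math.sqrt((2.0 * beta / T) * acc * dt)

def speckle_contrast_exact(T, tau_c, beta=1.0):
    """Closed form: K^2 = 2*beta*(exp(-x) - 1 + x)/x^2 with x = 2T/tau_c."""
    x = 2.0 * T / tau_c
    return math.sqrt(2.0 * beta * (math.exp(-x) - 1.0 + x) / x**2)
```

Shorter decorrelation times (faster flow) yield lower contrast, which is the quantity LSI maps onto relative blood flow.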
Bisaso, Kuteesa R; Mukonzo, Jackson K; Ette, Ene I
2015-11-01
The study was undertaken to develop a pharmacokinetic-pharmacodynamic model to characterize efavirenz-induced neuropsychologic impairment, given preexistent impairment, which can be used for the optimization of efavirenz therapy via Monte Carlo simulations. The modeling was performed with NONMEM 7.2. A 1-compartment pharmacokinetic model was fitted to efavirenz concentration data from 196 Ugandan patients treated with a 600-mg daily efavirenz dose. Pharmacokinetic parameters and area under the curve (AUC) were derived. Neuropsychologic evaluation of the patients was done at baseline and in week 2 of antiretroviral therapy. A discrete-time 2-state first-order Markov model was developed to describe neuropsychologic impairment. Efavirenz AUC, day 3 efavirenz trough concentration, and female sex increased the probability (P01) of neuropsychologic impairment. Efavirenz oral clearance (CL/F) increased the probability (P10) of resolution of preexistent neuropsychologic impairment. The predictive performance of the reduced (final) model, given the data, incorporating AUC on P01and CL /F on P10, showed that the model adequately characterized the neuropsychologic impairment observed with efavirenz therapy. Simulations with the developed model predicted a 7% overall reduction in neuropsychologic impairment probability at 450 mg of efavirenz. We recommend a reduction in efavirenz dose from 600 to 450 mg, because the 450-mg dose has been shown to produce sustained antiretroviral efficacy.
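A discrete-time two-state first-order Markov chain of the kind described can be sketched as follows. The function and the transition probabilities used in the example are illustrative, not the fitted NONMEM estimates; state 0 is "unimpaired" and state 1 is "impaired".

```python
import random

def simulate_impairment(p01, p10, n_patients=5000, n_visits=50, seed=1):
    """Simulate a cohort through a two-state first-order Markov chain:
    p01 = per-step probability of becoming impaired (0 -> 1),
    p10 = per-step probability of resolution (1 -> 0).
    Returns the fraction of patients impaired at the final visit."""
    rng = random.Random(seed)
    impaired_final = 0
    for _ in range(n_patients):
        state = 0
        for _ in range(n_visits):
            u = rng.random()
            if state == 0 and u < p01:
                state = 1
            elif state == 1 and u < p10:
                state = 0
        impaired_final += state
    return impaired_final / n_patients

# The chain's stationary impairment probability is p01 / (p01 + p10),
# which the long-run simulated fraction should approach.
```

Covariates such as AUC or CL/F would enter by making p01 and p10 functions of the individual patient, which is how dose-reduction scenarios can be explored by simulation.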
NASA Astrophysics Data System (ADS)
Chen, Shaohua; Xu, Yaopengxiao; Jiao, Yang
2016-12-01
Microstructure control is an important subject in solid-state sintering and plays a crucial role in determining post-sintering material properties, such as strength, toughness and density, to name but a few. The preponderance of existing numerical sintering simulations model the morphology evolution and densification process driven by surface energy minimization by either dilating the particles to be sintered or using the vacancy annihilation model. Here, we develop a novel kinetic Monte Carlo model of morphology evolution and densification during free sintering. Specifically, we derive analytically a heterogeneous densification rate of the sintering system by considering sintering stress induced mass transport. The densification of the system is achieved by modeling the sintering stress induced mass transfer via applying effective particle displacement and grain boundary migration with an efficient two-step iterative interfacial energy minimization procedure. Coarsening is also considered in the later stages of the simulations. We show that our model can accurately capture the diffusion-induced evolution of particle morphology, including neck formation and growth, as well as realistically reproduce the overall densification of the sintered material. The computationally obtained dynamic density evolution curves for both two-particle sintering and many-particle material sintering are found to be in excellent agreement with the corresponding experimental master sintering curves. Our model can be utilized to control a variety of structural and physical properties of the sintered materials, such as the pore size and final material density.
3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.
2014-07-01
After deep-space hibernation, ESA's Rosetta spacecraft was successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and start to observe the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling in order to interpret the resulting measurements and establish observational and data analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback with 3D DSMC is that it is computationally highly intensive and thus time consuming. However, the performance can be dramatically increased with parallel computing on Graphic Processor Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1 where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models which are based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC^{++} (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24 hours.
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
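The paper's specific sequential-updating scheme is not reproduced here, but the idea of conflict-free domain decomposition can be illustrated with a standard checkerboard sweep of a 2D Ising-type lattice: sites of one colour do not interact with each other, so all sites of that colour could be updated by separate processors without violating detailed balance. This is a generic sketch under that assumption, not the authors' algorithm.

```python
import math, random

def checkerboard_sweep(spins, L, beta, rng):
    """One Metropolis sweep over an L x L periodic spin lattice, updating the
    two checkerboard sublattices in sequence. Within one colour, no site is a
    nearest neighbour of another, so each colour is trivially parallelizable."""
    for colour in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != colour:
                    continue
                s = spins[i][j]
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * s * nb  # energy change of flipping s
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = -s

def run(L=16, beta=1.0, sweeps=200, seed=2):
    """Quench a random configuration at low temperature; return the absolute
    magnetization per site and the energy per site."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        checkerboard_sweep(spins, L, beta, rng)
    m = abs(sum(sum(row) for row in spins)) / (L * L)
    e = 0.0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    e /= L * L
    return m, e
```

At low temperature the sweep drives the system toward an ordered, low-energy configuration, which is the behaviour a parallel decomposition must preserve.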
Bishop, Martin J; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon "packets" as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct "humped" morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with "virtual-electrode" regions of strong de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity.
Using a direct simulation Monte Carlo approach to model collisions in a buffer gas cell.
Doppelbauer, Maximilian J; Schullian, Otto; Loreau, Jerome; Vaeck, Nathalie; van der Avoird, Ad; Rennick, Christopher J; Softley, Timothy P; Heazlewood, Brianna R
2017-01-28
A direct simulation Monte Carlo (DSMC) method is applied to model collisions between He buffer gas atoms and ammonia molecules within a buffer gas cell. State-to-state cross sections, calculated as a function of the collision energy, enable the inelastic collisions between He and NH3 to be considered explicitly. The inclusion of rotational-state-changing collisions affects the translational temperature of the beam, indicating that elastic and inelastic processes should not be considered in isolation. The properties of the cold molecular beam exiting the cell are examined as a function of the cell parameters and operating conditions; the rotational and translational energy distributions are in accord with experimental measurements. The DSMC calculations show that thermalisation occurs well within the typical 10-20 mm length of many buffer gas cells, suggesting that shorter cells could be employed in many instances, yielding a higher flux of cold molecules.
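The elastic-collision kernel at the heart of any DSMC code can be sketched as below. This is an illustration, not the authors' implementation: it assumes equal-mass partners and isotropic (hard-sphere-style) post-collision scattering, whereas He-NH3 collisions involve unequal masses and state-to-state cross sections.

```python
import math, random

def collide(v1, v2, rng):
    """Post-collision velocities for an elastic collision between two
    equal-mass particles: the centre-of-mass velocity is unchanged and the
    relative velocity is rotated to a uniformly random direction, which
    conserves both momentum and kinetic energy."""
    g = [a - b for a, b in zip(v1, v2)]          # relative velocity
    gmag = math.sqrt(sum(c * c for c in g))
    cm = [(a + b) / 2.0 for a, b in zip(v1, v2)]  # centre-of-mass velocity
    cos_t = 2.0 * rng.random() - 1.0              # random unit vector
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    gn = [gmag * sin_t * math.cos(phi),
          gmag * sin_t * math.sin(phi),
          gmag * cos_t]
    v1p = [c + 0.5 * d for c, d in zip(cm, gn)]
    v2p = [c - 0.5 * d for c, d in zip(cm, gn)]
    return v1p, v2p
```

Repeated application of such collisions between beam molecules and buffer gas atoms is what drives the thermalisation the abstract describes.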
Modeling of vision loss due to vitreous hemorrhage by Monte Carlo simulation.
Al-Saeed, Tarek A; El-Zaiat, Sayed Y
2014-08-01
Vitreous hemorrhage is the leaking of blood into the vitreous humor which results from different diseases. Vitreous hemorrhage leads to vision problems ranging from mild to severe cases in which blindness occurs. Since erythrocytes are the major scatterers in blood, we are modeling light propagation in vitreous humor with erythrocytes randomly distributed in it. We consider the total medium (vitreous humor plus erythrocytes) as a turbid medium and apply Monte Carlo simulation. Then, we calculate the parameters characterizing vision loss due to vitreous hemorrhage. This work shows that the increase of the volume fraction of erythrocytes results in a decrease of the total transmittance of the vitreous body and an increase in the radius of maximum transmittance, the width of the circular strip of bright area, and the radius of the shadow area.
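The slab-transmittance calculation described can be sketched with a weight-based Monte Carlo random walk. The isotropic phase function, geometry, and termination threshold below are simplifying assumptions for illustration, not the parameters of the study; increasing the absorber (erythrocyte) load shows the drop in total transmittance the abstract reports.

```python
import math, random

def transmittance(mu_a, mu_s, thickness, n_photons=20000, seed=4):
    """Monte Carlo estimate of total transmittance through a slab of given
    thickness (same length units as 1/mu). Photons take exponentially
    distributed steps, lose weight mu_s/mu_t per interaction (absorption by
    weight attenuation), and rescatter isotropically (uz uniform in [-1, 1])."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z >= thickness:
                total += w            # exits the far side: transmitted
                break
            if z <= 0.0:
                break                 # exits backwards: reflected
            w *= mu_s / mu_t          # absorption-weighted survival
            if w < 1e-4:
                break                 # crude low-weight cutoff
            uz = 2.0 * rng.random() - 1.0
    return total / n_photons
```

Raising mu_a (a proxy for a higher erythrocyte volume fraction) lowers the transmitted fraction, mirroring the reported loss of vision-relevant transmittance.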
Modeling the tight focusing of beams in absorbing media with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Brandes, Arnd R.; Elmaklizi, Ahmed; Akarçay, H. Günhan; Kienle, Alwin
2014-11-01
A severe drawback to the scalar Monte Carlo (MC) method is the difficulty of introducing diffraction when simulating light propagation. This hinders, for instance, the accurate modeling of beams focused through microscope objectives, where the diffraction patterns in the focal plane are of great importance in various applications. Here, we propose to overcome this issue by means of a direct extinction method. In the MC simulations, the photon paths' initial positions are sampled from probability distributions which are calculated with a modified angular spectrum of the plane waves technique. We restricted our study to the two-dimensional case, and investigated the feasibility of our approach for absorbing yet nonscattering materials. We simulated the focusing of collimated beams with uniform profiles through microscope objectives. Our results were compared with those yielded by independent simulations using the finite-difference time-domain method. Very good agreement was achieved between the results of both methods, not only for the power distributions around the focal region including diffraction patterns, but also for the distribution of the energy flow (Poynting vector).
NASA Astrophysics Data System (ADS)
Numazawa, Satoshi; Smith, Roger
2011-10-01
Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
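Event selection in rejection-free kinetic Monte Carlo can be sketched with the standard residence-time (BKL) algorithm. This is a generic illustration of KMC stepping, not the authors' hierarchical transition-level classification scheme: each transition is chosen with probability proportional to its rate, and time advances by an exponential waiting time.

```python
import math, random

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick event i with probability
    rates[i]/sum(rates), then advance time by dt ~ Exp(sum(rates))."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

def event_frequencies(rates, n=200000, seed=5):
    """Empirical selection frequencies, which should match rates/sum(rates)."""
    rng = random.Random(seed)
    counts = [0] * len(rates)
    for _ in range(n):
        i, _ = kmc_step(rates, rng)
        counts[i] += 1
    return [c / n for c in counts]
```

Grouping fast, low-barrier events into freely accepted classes, as the abstract describes, effectively removes them from this rate list and so reduces the number of KMC steps spent on thermal fluctuations.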
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn’s Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods: the established prohibiting-double-relaxation method and the newly proposed multi-mode relaxation method. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
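The Landau-Teller expression referenced above describes exponential relaxation of the vibrational energy toward its equilibrium value. A minimal numerical sketch (illustrative step sizes and values, not the paper's DSMC implementation) integrates dE/dt = (E_eq - E)/τ and checks it against the analytic solution:

```python
import math

def landau_teller(e0, e_eq, tau, t_end, dt=1e-4):
    """Explicit-Euler integration of the Landau-Teller relaxation equation
    dE/dt = (E_eq - E)/tau from E(0) = e0 up to time t_end."""
    n = round(t_end / dt)
    e = e0
    for _ in range(n):
        e += dt * (e_eq - e) / tau
    return e

def landau_teller_exact(e0, e_eq, tau, t):
    """Analytic solution: E(t) = E_eq + (E0 - E_eq) * exp(-t/tau)."""
    return e_eq + (e0 - e_eq) * math.exp(-t / tau)
```

In a DSMC code the same exponential approach to equilibrium emerges statistically from per-collision relaxation probabilities, which is what the verification against Landau-Teller checks.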
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included along with a discussion on ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
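The game-level building block of the Newton-Keller theory has a closed form under the i.i.d. assumption, and a Monte Carlo simulation of the kind described can verify it. The sketch below (in Python rather than the dissertation's Matlab) assumes each point is won independently with probability p:

```python
import random

def p_game(p):
    """Closed-form probability of winning a game when each point is won
    i.i.d. with probability p: win 4-0, 4-1 or 4-2 outright, or reach
    deuce at 3-3 and win from deuce (geometric series in two-point blocks)."""
    q = 1.0 - p
    straight = p**4 * (1.0 + 4.0 * q + 10.0 * q * q)
    p_deuce = 20.0 * p**3 * q**3
    win_from_deuce = p * p / (1.0 - 2.0 * p * q)
    return straight + p_deuce * win_from_deuce

def p_game_mc(p, n=200000, seed=6):
    """Monte Carlo check: simulate games point by point (win by two,
    at least four points) and count the server's wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        a = b = 0
        while True:
            if rng.random() < p:
                a += 1
            else:
                b += 1
            if a >= 4 and a - b >= 2:
                wins += 1
                break
            if b >= 4 and b - a >= 2:
                break
    return wins / n
```

Replacing the constant p with a state-dependent probability is precisely the kind of non-iid experiment the simulation framework enables beyond the analytical theory.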
Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim
2016-06-01
In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
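A one-dimensional sketch of the surrogate idea is given below, using a Legendre least-squares fit in place of the paper's full PC machinery, with a toy smooth response standing in for the expensive MC molecular simulation. Everything here (the toy model, degree, and training size) is an illustrative assumption.

```python
import numpy as np

def build_pc_surrogate(f, deg=6, n_train=200, seed=7):
    """Fit a 1-D Legendre polynomial surrogate to an expensive model f over
    the standardized input xi in [-1, 1]; once fitted, evaluating the
    surrogate is essentially free compared with re-running f."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-1.0, 1.0, n_train)
    y = f(xi)
    coeffs = np.polynomial.legendre.legfit(xi, y, deg)
    return lambda x: np.polynomial.legendre.legval(x, coeffs)

# Toy smooth "pressure-like" response standing in for an MC simulation.
model = lambda x: np.exp(0.5 * x) + 0.3 * x**2
surrogate = build_pc_surrogate(model)
```

Once such a surrogate reproduces the simulator to within noise, parameter optimization (here, tuning ε and σ against experimental data) can loop over the surrogate instead of the simulator.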
Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(^1Σg^+)-N2(^1Σg^+) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
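The variable hard sphere (VHS) total cross section used for comparison follows a power-law scaling of the effective diameter with relative speed. A sketch, assuming the common form σ = π d_ref² (c_ref/c_r)^(2ω-1); the reference diameter in the test is an illustrative value, not one taken from the paper:

```python
import math

def vhs_cross_section(c_r, d_ref, c_ref, omega):
    """Variable hard sphere total cross section: the effective diameter
    scales as d = d_ref * (c_ref/c_r)**(omega - 0.5), so
    sigma = pi * d_ref**2 * (c_ref/c_r)**(2*omega - 1).
    omega = 0.5 recovers the speed-independent hard-sphere cross section;
    omega > 0.5 makes the cross section shrink with relative speed."""
    return math.pi * d_ref**2 * (c_ref / c_r) ** (2.0 * omega - 1.0)
```

Comparing trajectory-computed total cross sections against this form, as the abstract describes, checks whether the viscosity-derived VHS parameters remain adequate at shock temperatures.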
Monte Carlo computer simulations of Venus equilibrium and global resurfacing models
NASA Technical Reports Server (NTRS)
Dawson, D. D.; Strom, R. G.; Schaber, G. G.
1992-01-01
Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or equal to 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid model for an explanation of the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
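A toy version of such a resurfacing simulation can be sketched on a periodic unit square standing in for the planetary surface; the crater counts, patch sizes, and event counts below are illustrative, not the paper's cases. If each event resurfaces a random patch of area fraction f, the expected surviving fraction of an initial crater population after n events is (1 - f)^n, which the simulation should reproduce.

```python
import math, random

def resurfacing_sim(n_events, patch_area_frac, n_craters=500, seed=8):
    """Toy resurfacing experiment: craters are placed uniformly on a periodic
    unit square; each of n_events resurfacing events is a random disk covering
    patch_area_frac of the surface and destroys any crater it covers.
    Returns the fraction of the initial craters destroyed."""
    rng = random.Random(seed)
    r = math.sqrt(patch_area_frac / math.pi)
    craters = [(rng.random(), rng.random()) for _ in range(n_craters)]
    destroyed = set()
    for _ in range(n_events):
        cx, cy = rng.random(), rng.random()
        for k, (x, y) in enumerate(craters):
            dx = min(abs(x - cx), 1.0 - abs(x - cx))  # periodic wrap
            dy = min(abs(y - cy), 1.0 - abs(y - cy))
            if dx * dx + dy * dy <= r * r:
                destroyed.add(k)
    return len(destroyed) / n_craters
```

Comparing the simulated spatial distribution of surviving and embayed craters against the observed one is the essence of the test described in the abstract.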
NASA Astrophysics Data System (ADS)
Mok, C. M.; Suribhatla, R. M.; Wanakule, N.; Zhang, M.
2009-12-01
A reliability-based water resources management framework has been developed by AMEC Geomatrix over the last few years to optimally manage a water supply system that serves over two million people in the northern Tampa Bay region in Florida, USA, while protecting wetland health and preventing seawater intrusion. The framework utilizes stochastic optimization techniques to account for uncertainties associated with the prediction of water demand, surface water availability, baseline groundwater levels, a non-anthropogenic reservoir water budget, and hydrological/hydrogeological properties. Except for the hydrogeological properties, these uncertainties are partially caused by uncertainties in future rainfall patterns in the region. We present here a novel multivariate statistical model of rainfall and a methodology for generating Monte-Carlo realizations based on the statistical model. The model is intended to capture the spatial-temporal characteristics of daily rainfall intensity in 172 basins in the northern Tampa Bay region and is characterized by its high dimensionality. Daily rainfall intensity in each basin is expressed as the product of a binary random variable (RV) corresponding to the event of rain and a continuous RV representing the amount of rain. For the binary RVs we use a bivariate transformation technique to generate the Monte-Carlo realizations that form the basis for sequential simulation of the continuous RVs. A non-parametric Gaussian copula is used to develop the multivariate model for the continuous RVs. This methodology captures key spatial and temporal characteristics of daily rainfall intensities and overcomes numerical issues posed by the high dimensionality of the Gaussian copula.
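The occurrence-times-amount construction can be sketched for a pair of basins: correlated standard normals are thresholded to give the binary rain/no-rain RVs, and the same Gaussian-copula idea with an exponential marginal gives the amounts. All distribution parameters here (p_wet, rho, mean_mm) are illustrative assumptions, not fitted Tampa Bay values:

```python
import math
import random

def daily_rainfall_pair(p_wet=0.3, rho=0.7, mean_mm=8.0, seed=None):
    """Gaussian-copula sketch of daily rainfall in two basins.

    Occurrence: correlated standard normals thresholded at the p_wet
    quantile give the binary rain/no-rain RVs.  Amount: the same
    construction with an exponential marginal.  All parameters are
    illustrative assumptions, not fitted Tampa Bay values.
    """
    rng = random.Random(seed)

    def corr_pair():
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        return z1, z2

    def ncdf(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    z1, z2 = corr_pair()
    wet = [ncdf(z1) < p_wet, ncdf(z2) < p_wet]        # binary RVs
    a1, a2 = corr_pair()
    # exponential marginal via the inverse-CDF transform of the copula
    amounts = [-mean_mm * math.log(1.0 - ncdf(a)) for a in (a1, a2)]
    # daily intensity = occurrence indicator x amount, per basin
    return [float(w) * x for w, x in zip(wet, amounts)]
```

The real model replaces the exponential with non-parametric marginals and extends the copula to 172 basins, which is where the high-dimensionality issues arise.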
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-03-07
Concerns about the supply of rare earth (RE) metals have generated a lot of interest in the search for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (∼2 nm) for a wide range of temperatures. The concentrations of the main constituents of these two phases increase as the temperature is lowered. Both FeCo-rich and NiAl-rich phases show B2 ordering, with Fe and Al on the α-site and Ni and Co on the β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. A small magnetic moment is also observed in the NiAl-rich phase, but the moment decreases as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. The results from our Monte Carlo simulations are consistent with available experimental results.
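The sampling side of such a study can be sketched with Metropolis Monte Carlo on a toy pair cluster expansion: a periodic binary alloy chain with a single effective cluster interaction, sampled by occupation swaps at fixed composition. This is a deliberately minimal stand-in for the alnico 5-7 Hamiltonian, with hypothetical parameters:

```python
import math
import random

def metropolis_alloy(n=64, n_b=32, J=1.0, kT=0.5, steps=20000, seed=0):
    """Metropolis MC on a toy nearest-neighbour cluster expansion.

    A periodic binary A/B chain with one effective cluster interaction
    J that penalises like neighbour pairs, sampled by swapping site
    occupations at fixed composition (conserved alloy stoichiometry).
    A minimal stand-in for the alnico Hamiltonian.
    """
    rng = random.Random(seed)
    occ = [1] * n_b + [0] * (n - n_b)
    rng.shuffle(occ)

    def energy():
        # count like nearest-neighbour pairs around the ring
        return J * sum(occ[i] == occ[(i + 1) % n] for i in range(n))

    e = energy()
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if occ[i] == occ[j]:
            continue
        occ[i], occ[j] = occ[j], occ[i]          # trial swap
        e_new = energy()
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            e = e_new                            # accept
        else:
            occ[i], occ[j] = occ[j], occ[i]      # reject: undo
    return occ, e
```

Fixed-composition swap moves are the natural choice here because, as in the alnico alloy, the overall stoichiometry is conserved while the atoms rearrange.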
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Douglass, Michael; Bezak, Eva; Penfold, Scott
2012-06-15
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB code was able to grow semi-realistic cell distributions (~2 × 10^8 cells in 1 cm^3) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
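The cell-placement idea can be sketched with rejection sampling, reducing the ellipsoids to spheres so the overlap test is a simple distance check (the paper's eigenvalue method handles the general rotated-ellipsoid case). All sizes and counts below are illustrative, far below the published 10^8-cell scale:

```python
import random

def grow_cell_volume(n_cells=200, box=100.0, r_min=2.0, r_max=5.0,
                     max_tries=20000, seed=7):
    """Rejection-sampling sketch of randomized cell packing.

    Cells are reduced to spheres so the overlap test is a distance
    check (the paper uses ellipsoids with an eigenvalue overlap test).
    Sizes and counts are illustrative, not the published scale.
    """
    rng = random.Random(seed)
    cells = []                       # (x, y, z, r) per placed cell
    tries = 0
    while len(cells) < n_cells and tries < max_tries:
        tries += 1
        x = rng.uniform(0.0, box)
        y = rng.uniform(0.0, box)
        z = rng.uniform(0.0, box)
        r = rng.uniform(r_min, r_max)
        # keep the candidate only if it overlaps no placed cell
        if all((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 > (r + cr) ** 2
               for cx, cy, cz, cr in cells):
            cells.append((x, y, z, r))
    return cells
```

The resulting coordinate list is exactly the kind of geometry description that can then be imported into a Monte Carlo toolkit and populated with sub-cellular regions.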
Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun
2016-09-01
Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. An incident carbon-ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE at 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.
Modeling of composite latex particle morphology by off-lattice Monte Carlo simulation.
Duda, Yurko; Vázquez, Flavio
2005-02-01
Composite latex particles have shown a great range of applications, such as paint resins, varnishes, waterborne adhesives, impact modifiers, etc. The high-performance properties of this kind of material may be explained in terms of a synergistic combination of two different polymers (usually a rubber and a thermoplastic). A great variety of composite latex particles with very different morphologies may be obtained by two-step emulsion polymerization processes. The formation of a specific particle morphology depends on the chemical and physical nature of the monomers used during the synthesis, the process temperature, the reaction initiator, the surfactants, etc. Only a few models have been proposed to explain the appearance of the composite particle morphologies, and these models have been based on the change of the interfacial energies during the synthesis. In this work, we present a new three-component model: a polymer blend (flexible and rigid chain particles) is dispersed in water, forming spherical cavities. Monte Carlo simulations of the model in two dimensions are used to determine the density distribution of chains and water molecules inside the suspended particle. This approach allows us to study the dependence of the morphology of the composite latex particles on the relative hydrophilicity and flexibility of the chain molecules as well as on their density and composition. It is shown that our simple model is capable of reproducing the main features of the various morphologies observed in synthesis experiments.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that in the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
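The core navigation step in such a quadric-surface geometry module is finding the distance along a ray to the surface Q(x) = x^T A x + b·x + c = 0, which reduces to solving a quadratic in the ray parameter. A CPU-side sketch of that step (not the actual GPU implementation):

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mat_vec(M, v):
    return [dot(row, v) for row in M]

def distance_to_quadric(p, d, A, b, c):
    """Smallest positive ray parameter t with Q(p + t*d) = 0, where
    Q(x) = x^T A x + b.x + c for a symmetric 3x3 A (nested lists).

    A CPU-side sketch of the navigation step for quadric-bounded
    regions, not the actual GPU code.
    """
    qa = dot(d, mat_vec(A, d))                      # t^2 coefficient
    qb = 2.0 * dot(p, mat_vec(A, d)) + dot(b, d)    # t coefficient
    qc = dot(p, mat_vec(A, p)) + dot(b, p) + c      # constant term
    if abs(qa) < 1e-14:                             # effectively linear
        t = -qc / qb if qb else None
        return t if t is not None and t > 1e-12 else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None                                 # ray misses surface
    r = math.sqrt(disc)
    for t in sorted([(-qb - r) / (2.0 * qa), (-qb + r) / (2.0 * qa)]):
        if t > 1e-12:                               # first crossing ahead
            return t
    return None
```

For a unit sphere (A = I, b = 0, c = -1) a ray from the origin reaches the surface at t = 1, while a ray starting outside and pointing away misses it entirely.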
2016-01-01
Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F^2 test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data
NASA Astrophysics Data System (ADS)
Males, Richard M.; Melby, Jeffrey A.
2011-12-01
The US Army Corps of Engineers has a mission to conduct a wide array of programs in the arenas of water resources, including coastal protection. Coastal projects must be evaluated according to sound economic principles, and considerations of risk assessment and sea level change must be included in the analysis. Breakwaters are typically nearshore structures designed to reduce wave action in the lee of the structure, resulting in calmer waters within the protected area, with attendant benefits in terms of usability by navigation interests, shoreline protection, reduction of wave runup and onshore flooding, and protection of navigation channels from sedimentation and wave action. A common method of breakwater construction is the rubble mound breakwater, constructed in a trapezoidal cross section with gradually increasing stone sizes from the core out. Rubble mound breakwaters are subject to degradation from storms, particularly for antiquated designs with under-sized stones insufficient to protect against intense wave energy. Storm waves dislodge the stones, resulting in lowering of crest height and associated protective capability for wave reduction. This behavior happens over a long period of time, so a lifecycle model (that can analyze the damage progression over a period of years) is appropriate. Because storms are highly variable, a model that can support risk analysis is also needed. Economic impacts are determined by the nature of the wave climate in the protected area, and by the nature of the protected assets. Monte Carlo simulation (MCS) modeling that incorporates engineering and economic impacts is a worthwhile method for handling the many complexities involved in real world problems. The Corps has developed and utilized a number of MCS models to compare project alternatives in terms of their costs and benefits. This paper describes one such model, Coastal Structure simulation (CSsim) that has been developed specifically for planning level analysis of
Monte Carlo simulation of x-ray scatter based on patient model from digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Liu, Bob; Wu, Tao; Moore, Richard H.; Kopans, Daniel B.
2006-03-01
We are developing a breast-specific scatter correction method for digital breast tomosynthesis (DBT). The 3D breast volume was initially reconstructed from 15 projection images acquired from a GE prototype tomosynthesis system without correction of scatter. The voxel values were mapped to tissue compositions using various segmentation schemes. This voxelized digital breast model was entered into a Monte Carlo package simulating the prototype tomosynthesis system. One billion photons were generated from the x-ray source for each projection in the simulation and images of scattered photons were obtained. A primary-only projection image was then produced by subtracting the scatter image from the corresponding original projection image, which contains contributions from both primary and scattered photons. The scatter-free projection images were then used to reconstruct the 3D breast using the same algorithm. Compared with the uncorrected 3D image, the x-ray attenuation coefficients represented by the scatter-corrected 3D image are closer to those derived from the measurement data.
Monte Carlo simulation of flexible trimers: from square well chains to amphiphilic primitive models.
Jiménez-Serratos, Guadalupe; Gil-Villegas, Alejandro; Vega, Carlos; Blas, Felipe J
2013-09-21
In this work, we present Monte Carlo computer simulation results for a primitive model of a self-assembling system based on a flexible 3-mer chain interacting via square-well interactions. The effect of switching off the attractive interaction in an extreme sphere is analyzed, since the anisotropy in the molecular potential promotes self-organization. Before addressing studies on self-organization it is necessary to know the vapor-liquid equilibrium of the system, to avoid confusing self-organization with phase separation. The range of the attractive potential of the model, λ, is kept constant and equal to 1.5σ, where σ is the diameter of a monomer sphere, while the attractive interaction in one of the monomers was gradually turned off until a pure hard-body interaction was obtained. We present the vapor-liquid coexistence curves for the different models studied, their critical properties, and the comparison with the SAFT-VR theory prediction [A. Gil-Villegas, A. Galindo, P. J. Whitehead, S. J. Mills, G. Jackson, and A. N. Burgess, J. Chem. Phys. 106, 4168 (1997)]. Evidence of self-assembly for this system is discussed.
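The pair interaction underlying the model is easy to state explicitly; switching the attraction off for one monomer (eps = 0) turns it into the pure hard-body "extreme sphere". A sketch using the paper's range lam = 1.5:

```python
def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Square-well pair potential between two monomer spheres.

    Hard core for r < sigma, attractive well of depth eps out to
    lam * sigma (lam = 1.5 as in the paper), zero beyond.  Setting
    eps = 0 recovers the hard-body monomer used in the amphiphilic
    variants of the model.
    """
    if r < sigma:
        return float('inf')   # hard-sphere overlap
    if r < lam * sigma:
        return -eps           # attractive square well
    return 0.0
```

A Metropolis acceptance test built on this energy (with the infinite value rejecting overlapping configurations outright) is the standard way such trimer models are sampled.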
Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics
NASA Astrophysics Data System (ADS)
Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou
Ebola virus infection is a severe infectious disease with one of the highest case fatality rates, and it has become a global public health threat. What makes the disease the worst of all is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of Ebola epidemics has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is also verified and the basic reproduction number is calculated. Besides, stability conditions are also checked, and finally simulation is done using both the Euler method and one of the most influential algorithms, the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination, to predict the effect of vaccination on the infected individuals over time, and of quarantine are discussed. The results show that quarantine and vaccination are very effective ways to control an Ebola epidemic. From our study it was also seen that an individual is unlikely to contract Ebola virus a second time after surviving a first infection. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of an Ebola epidemic.
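The deterministic part of such an analysis can be sketched with an Euler integration of a toy SIQR-type model with vaccination and quarantine. The compartment structure and all rate constants below are hypothetical illustrations of the method, not the article's actual model or fitted parameters:

```python
def ebola_euler(beta=0.3, gamma=0.1, nu=0.05, q=0.08, days=160.0, dt=0.1):
    """Euler integration of a toy SIQR model with vaccination.

    S susceptible, I infected, Q quarantined, R removed (fractions of
    the population); nu vaccinates susceptibles, q quarantines the
    infected.  All rates are hypothetical illustrations of the Euler
    step, not the article's fitted model.
    """
    S, I, Qc, R = 0.99, 0.01, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I            # mass-action incidence
        dS = -new_inf - nu * S
        dI = new_inf - (gamma + q) * I
        dQ = q * I - gamma * Qc
        dR = gamma * (I + Qc) + nu * S
        S, I, Qc, R = S + dt * dS, I + dt * dI, Qc + dt * dQ, R + dt * dR
    return S, I, Qc, R
```

Because the compartment derivatives sum to zero, the total population fraction is conserved at every Euler step, which is a useful sanity check on the integration.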
NASA Astrophysics Data System (ADS)
Gereben, Orsolya; Pusztai, László
2013-10-01
The liquid structure of tetrachloroethene has been investigated on the basis of measured neutron and X-ray scattering structure factors, applying molecular dynamics simulations and reverse Monte Carlo (RMC) modeling with flexible molecules and interatomic potentials. As no complete all-atom force field parameter set could be found for this planar molecule, the closest matching all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) intra-molecular parameter set was improved by equilibrium bond length and angle parameters coming from electron diffraction experiments [I. L. Karle and J. Karle, J. Chem. Phys. 20, 63 (1952)]. In addition, four different intra-molecular charge distribution sets were tried, so in total, eight different molecular dynamics simulations were performed. The best parameter set was selected by calculating the mean square difference between the calculated total structure factors and the corresponding experimental data. The best parameter set proved to be the one that uses the electron diffraction based intra-molecular parameters and the charges qC = 0.1 and qCl = -0.05. The structure was further successfully refined by applying RMC computer modeling with flexible molecules that were kept together by interatomic potentials. Correlation functions concerning the orientation of molecular axes and planes were also determined. They reveal that the molecules closest to each other exclusively prefer the parallel orientation of both the molecular axes and planes. Molecules forming the first maximum of the center-center distribution have a preference for <30° and >60° axis orientation and >60° molecular plane arrangement. A second coordination sphere at ˜11 Å and a very small third one at ˜16 Å can be found as well, without preference for any axis or plane orientation.
Canopy polarized BRDF simulation based on non-stationary Monte Carlo 3-D vector RT modeling
NASA Astrophysics Data System (ADS)
Kallel, Abdelaziz; Gastellu-Etchegorry, Jean Philippe
2017-03-01
Vector radiative transfer (VRT) has been largely used to simulate the polarized reflectance of the atmosphere and ocean. However, it has not yet been properly applied to the polarized reflectance of vegetation cover. In this study, we propose a 3-D VRT model based on a modified Monte Carlo (MC) forward ray tracing simulation to analyze vegetation canopy reflectance. Two kinds of leaf scattering are taken into account: (i) Lambertian diffuse reflectance and transmittance and (ii) specular reflection. A new method to estimate the condition on leaf orientation to produce reflection is proposed, and its probability of occurrence, Pl,max, is computed. It is then shown that Pl,max is low, but when reflection happens, the corresponding radiance Stokes vector, Io, is very high. Such a phenomenon dramatically increases the MC variance and yields an irregular reflectance distribution function. For better regularization, we propose a non-stationary MC approach that simulates reflection for each sunny leaf assuming that its orientation is randomly chosen according to its angular distribution. It is shown in this case that the average canopy reflection is proportional to Pl,max ·Io, which produces a smooth distribution. Two experiments are conducted: (i) assuming leaf light polarization is due only to Fresnel reflection and (ii) the general polarization case. In the former experiment, our results confirm that in the forward direction, the canopy polarizes light horizontally. In addition, they show that in inclined forward directions, diagonal polarization can be observed. In the latter experiment, polarization is produced in all orientations. It is particularly pointed out that specular polarization explains only part of the forward polarization. Diffuse scattering polarizes light horizontally and vertically in forward and backward directions, respectively. A weak circular polarization signal is also observed near the backscattering direction. Finally, validation of the non
Microsommite: crystal chemistry, phase transitions, Ising model and Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bonaccorsi, E.; Merlino, S.; Pasero, M.; Macedonio, G.
Microsommite, ideal formula [Na4K2(SO4)][Ca2Cl2][Si6Al6O24], is a rare feldspathoid that occurs in volcanic products of Vesuvius. It belongs to the cancrinite-davyne group of minerals, presenting an ABAB... stacking sequence of layers that contain six-membered rings of tetrahedra, with Si and Al cations regularly alternating in the tetrahedral sites. The structure was refined in space group P63 to R = 0.053 by means of single-crystal X-ray diffraction data. The cell parameters are a = 22.161 Å = √3 a_dav, c = 5.358 Å = c_dav, Z = 3. The superstructure arises due to the long-range ordering of extra-framework ions within the channels of the structure. This ordering progressively decreases with rising temperature until it is completely lost and microsommite transforms into davyne. The order-disorder transformation has been monitored in several crystals by means of X-ray superstructure reflections, and the critical parameters Tc ≈ 750 °C and β ≈ 0.12 were obtained. The kinetics of the ordering process were followed at different temperatures and the activation energy was determined to be about 125 kJ mol^-1. The continuous order-disorder phase transition in microsommite has been discussed on the basis of a two-dimensional Ising model on a triangular lattice with nn (nearest-neighbour) and nnn (next-nearest-neighbour) interactions. Such a model was simulated using a Monte Carlo technique. The theoretical model matches the experimental data well; two phase transitions were indicated by the simulated runs: at low temperature only one of the three sublattices begins to disorder, whereas the second transition involves all three sublattices.
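The simulation technique can be illustrated with a Metropolis sketch of a nearest-neighbour Ising model on a triangular lattice with periodic boundaries (the paper's model additionally includes nnn interactions and the three-sublattice structure). Parameters are illustrative:

```python
import math
import random

def triangular_ising(L=12, J=1.0, kT=2.0, sweeps=200, seed=3):
    """Metropolis sketch of a nearest-neighbour Ising model on a
    triangular lattice (six neighbours per site, periodic boundaries).

    The paper's model additionally includes nnn interactions and three
    sublattices; this minimal version only illustrates the sampling.
    Returns the magnetisation per site.
    """
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    # the six neighbour offsets of a triangular lattice in oblique coords
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            h = sum(s[(i + di) % L][(j + dj) % L] for di, dj in nbrs)
            dE = 2.0 * J * s[i][j] * h   # energy cost of flipping s[i][j]
            if dE <= 0.0 or rng.random() < math.exp(-dE / kT):
                s[i][j] = -s[i][j]
    return sum(map(sum, s)) / (L * L)
```

Sweeping kT through the transition and tracking sublattice order parameters instead of the global magnetisation is the natural extension toward the order-disorder study described above.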
Modeling of multi-band drift in nanowires using a full band Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Hathwar, Raghuraj; Saraniti, Marco; Goodnick, Stephen M.
2016-07-01
We report on a new numerical approach for multi-band drift within the context of full band Monte Carlo (FBMC) simulation and apply this to Si and InAs nanowires. The approach is based on the solution of the Krieger and Iafrate (KI) equations [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986)], which gives the probability of carriers undergoing interband transitions subject to an applied electric field. The KI equations are based on the solution of the time-dependent Schrödinger equation, and previous solutions of these equations have used Runge-Kutta (RK) methods to numerically solve the KI equations. This approach made the solution of the KI equations numerically expensive and was therefore only applied to a small part of the Brillouin zone (BZ). Here we discuss an alternate approach to the solution of the KI equations using the Magnus expansion (also known as "exponential perturbation theory"). This method is more accurate than the RK method as the solution lies on the exponential map and shares important qualitative properties with the exact solution such as the preservation of the unitary character of the time evolution operator. The solution of the KI equations is then incorporated through a modified FBMC free-flight drift routine and applied throughout the nanowire BZ. The importance of the multi-band drift model is then demonstrated for the case of Si and InAs nanowires by simulating a uniform field FBMC and analyzing the average carrier energies and carrier populations under high electric fields. Numerical simulations show that the average energy of the carriers under high electric field is significantly higher when multi-band drift is taken into consideration, due to the interband transitions allowing carriers to achieve higher energies.
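The key advantage of the Magnus approach over explicit time stepping is that the propagator stays exactly unitary. This can be seen in a toy 2x2 propagation with H = sigma_x, where the first-order Magnus step is the exact matrix exponential while an explicit Euler step steadily inflates the norm. A sketch of the numerical point only, not the Krieger-Iafrate system itself:

```python
import math

def mat_mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def evolve(steps=100, dt=0.05):
    """First-order Magnus step vs explicit Euler for H = sigma_x.

    The Magnus step U <- exp(-i H dt) U is exactly unitary; the Euler
    step U <- (I - i H dt) U inflates the norm by sqrt(1 + dt^2) each
    step.  Toy 2x2 sketch, not the Krieger-Iafrate equations.
    """
    # exp(-i*sigma_x*dt) = cos(dt) I - i sin(dt) sigma_x
    magnus = [[math.cos(dt), -1j * math.sin(dt)],
              [-1j * math.sin(dt), math.cos(dt)]]
    euler = [[1.0, -1j * dt], [-1j * dt, 1.0]]   # I - i H dt
    U_m = [[1.0, 0.0], [0.0, 1.0]]
    U_e = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        U_m = mat_mul(magnus, U_m)
        U_e = mat_mul(euler, U_e)

    def col_norm(U):  # norm of the first column; equals 1 for unitary U
        return math.sqrt(abs(U[0][0]) ** 2 + abs(U[1][0]) ** 2)

    return col_norm(U_m), col_norm(U_e)
```

This norm preservation is exactly the "solution lies on the exponential map" property the abstract cites as the reason the Magnus expansion outperforms Runge-Kutta here.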
Quasi-ballistic light scattering - analytical models versus Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Turcu, Ioan; Kirillin, Mikhail
2009-08-01
Approximate analytical solutions for light scattering in a plane-parallel geometry, where each scattering event follows a Henyey-Greenstein (HG) phase function, are presented and compared with Monte Carlo simulations. Analysis of each nth-order scattered flux shows that the resulting angular spreading is also very well described by an HG phase function. However, the total scattered flux deviates from the HG-type dependence, revealing the limits of the approximations.
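The HG phase function admits an exact inverse-CDF sampling formula, which is the building block of such Monte Carlo comparisons; a quick check is that the sampled mean cosine reproduces the anisotropy factor g. A minimal sketch (not the layered-geometry transport code used in the paper):

```python
import math
import random

def sample_hg_cos(g, rng):
    """Draw cos(theta) from the Henyey-Greenstein phase function
    via the standard inverse-CDF formula."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def mean_cosine(g=0.9, n=20000, seed=5):
    """MC check that the sampled mean cosine reproduces g,
    the defining property of the HG anisotropy factor."""
    rng = random.Random(seed)
    return sum(sample_hg_cos(g, rng) for _ in range(n)) / n
```

Chaining such draws over successive scattering orders and comparing the resulting angular distributions with the analytical HG fits is exactly the kind of experiment the abstract describes.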
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combining this with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value.
ZARE SAKHVIDI, Mohammad Javad; BARKHORDARI, Abolfazl; SALEHI, Maryam; BEHDAD, Shekoofeh; FALLAHZADEH, Hossein
2013-01-01
The applicability of two mathematical models for inhalation exposure prediction (the well-mixed room model and the near-field/far-field model) was validated against the standard sampling method in an operating room for isoflurane. Ninety-six air samples were collected from the near and far fields of the room and quantified by gas chromatography with flame-ionization detection. Isoflurane concentration was also predicted by the models, with Monte Carlo simulation used to incorporate the role of parameter variability. The models gave relatively more conservative results than the measurements, but there was no significant difference between the model predictions and the direct measurements, nor between the concentration predictions of the well-mixed room model and the near-field/far-field model. This suggests that the dispersion regime in the room was close to well mixed. Direct sampling showed that exposure in the same room for the same type of operation could vary by up to a factor of 17, variability that can be incorporated through Monte Carlo simulation. Mathematical models are a valuable option for predicting exposure in operating rooms. Our results also suggest that incorporating parameter variability through Monte Carlo simulation can strengthen predictions used in occupational hygiene decision making. PMID:23912206
Biscay, F; Ghoufi, A; Goujon, F; Lachet, V; Malfreyt, P
2008-11-06
The anisotropic united atoms (AUA4) model has been used for linear and branched alkanes to predict the surface tension as a function of temperature by Monte Carlo simulations. Simulations are carried out for n-alkanes ( n-C5, n-C6, n-C7, and n-C10) and for two branched C7 isomers (2,3-dimethylpentane and 2,4-dimethylpentane). Different operational expressions of the surface tension using both the thermodynamic and the mechanical definitions have been applied. The simulated surface tensions with the AUA4 model are found to be consistent within both definitions and in good agreement with experiments.
NASA Astrophysics Data System (ADS)
Wang, Yinglong; Qin, Aili; Chu, Lizhi; Deng, Zechao; Ding, Xuecheng; Guan, Li
2017-02-01
We simulated the nucleation and growth of Si nanoparticles produced by pulsed laser deposition using a Monte Carlo method at the molecular (microscopic) level. The model describes the mechanism and thermodynamic conditions of nucleation and growth of Si nanoparticles. Using the real physical scale of the target-substrate configuration, the model was applied to analyze the average size distribution of Si nanoparticles in argon ambient gas, and the calculated results are in agreement with the experimental results.
Baek, I-H; Lee, B-Y; Kang, J; Kwon, K-I
2015-04-01
Ondansetron is a potent antiemetic drug that has been commonly used to treat acute and chemotherapy-induced nausea and vomiting (CINV) in dogs. The aim of this study was to perform a pharmacokinetic analysis of ondansetron in dogs following oral administration of a single dose. A single 8-mg oral dose of ondansetron (Zofran®) was administered to beagles (n = 18), and the plasma concentrations of ondansetron were measured by liquid chromatography-tandem mass spectrometry. The data were analyzed by modeling approaches using ADAPT5, and model discrimination was determined by the likelihood-ratio test. The peak plasma concentration (Cmax) was 11.5 ± 10.0 ng/mL at 1.1 ± 0.8 h. The area under the plasma concentration vs. time curve from time zero to the last measurable concentration was 15.9 ± 14.7 ng·h/mL, and the half-life calculated from the terminal phase was 1.3 ± 0.7 h. The interindividual variability of the pharmacokinetic parameters was high (coefficient of variation > 44.1%), and the one-compartment model described the pharmacokinetics of ondansetron well. The estimated plasma concentration range of the usual empirical dose from the Monte Carlo simulation was 0.1-13.2 ng/mL. These findings will facilitate determination of the optimal dose regimen for dogs with CINV.
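A Monte Carlo dose-regimen exploration of this kind can be sketched with a one-compartment oral-absorption model; all parameter values and variabilities below are hypothetical illustrations, not the paper's estimates:

```python
import math
import random

def conc_1cpt_oral(t, dose, F, V, ka, ke):
    """Plasma concentration for a one-compartment model with first-order
    absorption (ng/mL if dose is in ng and V in mL)."""
    return (F * dose * ka / (V * (ka - ke))) * (math.exp(-ke * t) - math.exp(-ka * t))

random.seed(0)
cmax_samples = []
for _ in range(5000):
    # Hypothetical log-normal interindividual variability (CV ~ 40%).
    ka = 2.0 * math.exp(random.gauss(0, 0.4))      # absorption rate, 1/h
    ke = 0.53 * math.exp(random.gauss(0, 0.4))     # elimination rate, 1/h (t1/2 ~ 1.3 h)
    V = 40_000.0 * math.exp(random.gauss(0, 0.4))  # volume of distribution, mL
    tmax = math.log(ka / ke) / (ka - ke)           # time of peak concentration
    cmax_samples.append(conc_1cpt_oral(tmax, 8e6, 0.06, V, ka, ke))
cmax_samples.sort()
p5, p95 = cmax_samples[249], cmax_samples[4749]
print(f"simulated Cmax 5th-95th percentile: {p5:.1f}-{p95:.1f} ng/mL")
```

The resulting percentile band is the kind of population concentration range the Monte Carlo simulation in the study reports.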
Zhu, Caigang; Liu, Quan
2012-01-01
We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in the mass attenuation coefficients of bones is several times greater, and the error expected from such a model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of the other elements. To test this model with the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from the formula and the radiation-tissue interaction cross-section data were calculated. Comparison of the total energy released per unit mass by primary photons (terma) and of Monte Carlo dose calculations performed with this model and with the single-bone model demonstrated that at kilovoltage energies the discrepancy could exceed 100% in bone dose and 30% in soft-tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams, as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult humans.
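The fitting step can be sketched with NumPy: calcium fraction expressed as a polynomial in density, then evaluated for any model bone. The reference data below are illustrative placeholders, not the paper's literature values:

```python
import numpy as np

# Illustrative (not the paper's) reference bone data: density (g/cm^3)
# vs calcium mass fraction, loosely following the trend that denser
# bones carry a larger mineral fraction.
density = np.array([1.18, 1.33, 1.46, 1.61, 1.75, 1.92])
ca_frac = np.array([0.066, 0.098, 0.128, 0.156, 0.182, 0.209])

# Fit calcium fraction as a quadratic polynomial of density, mirroring
# the paper's general approach of parameterizing composition by density.
coeffs = np.polyfit(density, ca_frac, 2)
fit = np.poly1d(coeffs)

# Interpolate the composition for an arbitrary model bone density.
rho = 1.55
print(f"estimated Ca mass fraction at rho = {rho}: {float(fit(rho)):.3f}")
```

The same fit would be repeated for phosphorus, with the remaining elements assigned their averaged compositions.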
NASA Astrophysics Data System (ADS)
Nourazar, S. S.; Jahangiri, P.; Aboutalebi, A.; Ganjaei, A. A.; Nourazar, M.; Khadem, J.
2011-06-01
The effect of new terms in the improved algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is investigated by simulating a rarefied binary gas mixture flow inside a rotating cylinder. Dalton's law for the partial pressures contributed by each species of the binary gas mixture is incorporated into our simulations using both the MDSMC method and the direct simulation Monte Carlo (DSMC) method. Moreover, the effect of the exponent of the cosine of the deflection angle (α) in the inter-molecular collision models, the variable soft sphere (VSS) and the variable hard sphere (VHS), is investigated. The improvement in the simulation results is pronounced with the MDSMC method compared with the DSMC method. Compared with the VHS model, the VSS model shows some improvement in the simulated mixture temperature at radial distances close to the cylinder wall, where the temperature reaches its maximum.
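The role of the exponent α appears directly in the VSS deflection-angle sampling rule cos χ = 2ξ^(1/α) − 1 (Bird's formulation), which reduces to isotropic VHS scattering at α = 1. A quick numerical check of the mean deflection cosine:

```python
import random

def vss_cos_chi(alpha, xi):
    """Sample the post-collision deflection-angle cosine for the VSS
    model: cos(chi) = 2*xi**(1/alpha) - 1; alpha = 1 recovers the
    isotropic VHS model."""
    return 2.0 * xi ** (1.0 / alpha) - 1.0

random.seed(2)
N = 100_000
means = {}
for alpha in (1.0, 1.5):
    means[alpha] = sum(vss_cos_chi(alpha, random.random()) for _ in range(N)) / N
# Analytically <cos chi> = (alpha - 1) / (alpha + 1): zero (isotropic)
# for VHS, forward-biased for alpha > 1.
print(means)
```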
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
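The ANOVA idea can be sketched as follows: with M outer samples of the uncertain inputs and n simulated patients per run, the raw variance of the run means overstates input uncertainty by the within-run (patient-level) variance divided by n, and subtracting that term recovers the parameter-uncertainty variance from far fewer patients per run. A toy model with all distributions hypothetical:

```python
import random
import statistics

random.seed(3)

def patient_cost(theta):
    """Hypothetical patient-level model: cost depends on an uncertain
    input theta plus patient-level stochastic noise."""
    return theta + random.gauss(0, 5.0)

M, n = 200, 100  # M outer (input) samples, n patients per model run
run_means, run_vars = [], []
for _ in range(M):
    theta = random.gauss(50.0, 2.0)  # parameter uncertainty (true var = 4)
    costs = [patient_cost(theta) for _ in range(n)]
    run_means.append(statistics.mean(costs))
    run_vars.append(statistics.variance(costs))

# ANOVA-style decomposition: var(run means) = var(parameters) + var(noise)/n,
# so the corrected estimate removes the Monte Carlo noise contribution.
raw_between = statistics.variance(run_means)
mean_within = statistics.mean(run_vars)
corrected = raw_between - mean_within / n
print(f"raw: {raw_between:.2f}, corrected: {corrected:.2f} (true value 4.0)")
```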
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking
Wako, H
1989-12-01
Monte Carlo simulations of a small protein, crambin, were carried out with and without hydration energy. The methodology presented here is characterized, compared with other similar simulations of proteins in solution, by two points: (1) protein conformations are treated with fixed geometry, so that dihedral angles, rather than the Cartesian coordinates of atoms, are the independent variables; and (2) instead of treating water molecules explicitly, hydration energy is incorporated into the conformational energy function in the form Σ g_i A_i, where A_i is the accessible surface area of atomic group i in a given conformation and g_i is the free energy of hydration per unit surface area of that atomic group (i.e., a hydration-shell model). The validity of this model was tested by carrying out Monte Carlo simulations from two starting conformations, native and unfolded, and in two kinds of systems, in vacuo and in solution. In the simulations starting from the native conformation, the differences between the mean properties in vacuo and in solution are not very large, but the fluctuations around the mean conformation are relatively smaller in solution than in vacuo. In the simulations starting from the unfolded conformation, by contrast, the molecule fluctuates much more strongly in solution than in vacuo, and the effects of including the hydration energy are very pronounced. The results suggest that the method presented in this paper is useful for simulations of proteins in solution.
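The hydration-shell term is just an accessibility-weighted sum; a toy sketch with hypothetical g_i values and areas (the paper's actual parameters are not reproduced here):

```python
# Minimal sketch of the hydration-shell term E_h = sum_i g_i * A_i,
# where A_i is the solvent-accessible surface area of atomic group i
# and g_i its free energy of hydration per unit area. The g_i values
# below are hypothetical, for illustration only (kcal/mol/A^2).
G_PER_AREA = {"C": 0.012, "N/O": -0.060, "O-": -0.120, "N+": -0.180, "S": 0.010}

def hydration_energy(groups):
    """groups: list of (group_type, accessible_area_A2) pairs."""
    return sum(G_PER_AREA[t] * a for t, a in groups)

# Toy conformations: on folding, exposed areas shrink, so buried polar
# groups lose favorable hydration while buried apolar groups gain.
unfolded = [("C", 40.0), ("C", 35.0), ("N/O", 20.0), ("O-", 15.0)]
folded = [("C", 10.0), ("C", 8.0), ("N/O", 5.0), ("O-", 12.0)]
print(hydration_energy(unfolded), hydration_energy(folded))
```

In a simulation this term is simply added to the intramolecular conformational energy at each Monte Carlo step.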
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and to comply with other criteria that cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa·m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the measured pumping speeds. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also to supply the input parameters necessary for a future direct simulation Monte Carlo treatment of the full flow regime.
Huang, Chen-Hsi; Marian, Jaime
2016-10-26
We derive an Ising Hamiltonian for kinetic simulations involving interstitial and vacancy defects in binary alloys. Our model, which we term 'ABVI', incorporates solute transport by both interstitial defects and vacancies into a mathematically-consistent framework, and thus represents a generalization to the widely-used ABV model for alloy evolution simulations. The Hamiltonian captures the three possible interstitial configurations in a binary alloy: A-A, A-B, and B-B, which makes it particularly useful for irradiation damage simulations. All the constants of the Hamiltonian are expressed in terms of bond energies that can be computed using first-principles calculations. We implement our ABVI model in kinetic Monte Carlo simulations and perform a verification exercise by comparing our results to published irradiation damage simulations in simple binary systems with Frenkel pair defect production and several microstructural scenarios, with matching agreement found.
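Kinetic Monte Carlo evolution of such a Hamiltonian typically uses the residence-time (BKL/Gillespie) algorithm; a generic sketch with hypothetical vacancy-swap rates, not the paper's ABVI rate catalog:

```python
import math
import random

random.seed(4)

def kmc_step(rates, t):
    """One residence-time (BKL) kinetic Monte Carlo step: choose event i
    with probability rates[i]/total and advance the clock by an
    exponentially distributed waiting time with mean 1/total."""
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - random.random()) / total
    return chosen, t + dt

# Toy event catalog: a vacancy that can exchange with an A or a B atom
# at different rates (hypothetical values, 1/s).
rates = [5.0, 1.0]  # [vacancy-A swap, vacancy-B swap]
t, counts = 0.0, [0, 0]
for _ in range(60_000):
    i, t = kmc_step(rates, t)
    counts[i] += 1
print(counts, f"elapsed {t:.1f} s")
```

In a real simulation the rate catalog is rebuilt after each step from the local bond energies of the Hamiltonian; here the rates are held fixed so the event statistics (5:1) and the simulated clock are easy to verify.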
Dodds, Michael G; Vicini, Paolo
2004-09-01
Advances in computer hardware and the associated computer-intensive algorithms made feasible by these advances [like Markov chain Monte Carlo (MCMC) data analysis techniques] have made possible the application of hierarchical full Bayesian methods in analyzing pharmacokinetic and pharmacodynamic (PK-PD) data sets that are multivariate in nature. Pharmacokinetic data analysis in particular has been one area that has seized upon this technology to refine estimates of drug parameters from sparse data gathered in a large, highly variable population of patients. A drawback in this type of analysis is that it is difficult to quantitatively assess convergence of the Markov chains to a target distribution, and thus, it is sometimes difficult to assess the reliability of estimates gained from this procedure. Another complicating factor is that, although the application of MCMC methods to population PK-PD problems has been facilitated by new software designed for the PK-PD domain (specifically PKBUGS), experts in PK-PD may not have the necessary experience with MCMC methods to detect and understand problems with model convergence. The objective of this work is to provide an example of a set of diagnostics useful to investigators, by analyzing in detail three convergence criteria (namely the Raftery and Lewis, Geweke, and Heidelberger and Welch methods) on a simulated problem and with a rule of thumb of 10,000 chain elements in the Markov chain. We used two publicly available software packages to assess convergence of MCMC parameter estimates; the first performs Bayesian parameter estimation (PKBUGS/WinBUGS), and the second is focused on posterior analysis of estimates (BOA). The main message that seems to emerge is that accurately estimating confidence regions for the parameters of interest is more demanding than estimating the parameter means. Together, these tools provide numerical means by which an investigator can establish confidence in convergence and thus in the
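Of the three diagnostics, Geweke's is the simplest to sketch: it compares the means of an early and a late segment of the chain via a z-score. The version below is deliberately simplified in that it uses naive variance estimates and ignores autocorrelation; a proper implementation, as in the BOA package, uses spectral density estimates:

```python
import random
import statistics

def geweke_z(chain, first=0.1, last=0.5):
    """Simplified Geweke convergence diagnostic: z-score comparing the
    mean of the first `first` fraction of the chain to the mean of the
    last `last` fraction (autocorrelation ignored in this sketch)."""
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    va = statistics.variance(a) / len(a)
    vb = statistics.variance(b) / len(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va + vb) ** 0.5

random.seed(5)
# A stationary "chain" of white noise should give a small |z| ...
stationary = [random.gauss(0, 1) for _ in range(10_000)]
# ... while a drifting, unconverged chain gives a very large |z|.
drifting = [random.gauss(i / 1000.0, 1) for i in range(10_000)]
print(f"stationary z = {geweke_z(stationary):+.2f}, "
      f"drifting z = {geweke_z(drifting):+.2f}")
```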
Schaefer, C; Jansen, A P J
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
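The mass-balance correction can be sketched as a constrained least-squares problem: find the smallest change to the estimated rates that satisfies the conservation constraints derived from the stoichiometry. A minimal sketch solved via orthogonal projection (the paper's actual reduction to a non-singular linear system may differ):

```python
import numpy as np

def balance_rates(r, E):
    """Smallest-norm correction making the net production rates r
    satisfy the conservation constraints E @ r_corr = 0, where E has
    one row per conserved quantity. Solved as an orthogonal projection
    onto the null space of E (a least-squares sketch of the idea)."""
    Et = E.T
    correction = Et @ np.linalg.solve(E @ Et, E @ r)
    return r - correction

# Unimolecular reaction A -> B: total mass conservation demands
# r_A + r_B = 0, but noisy kinetic Monte Carlo estimates violate it.
E = np.array([[1.0, 1.0]])
r_noisy = np.array([-1.03, 0.98])
r_corr = balance_rates(r_noisy, E)
print(r_corr)
```

The correction splits the imbalance (here −0.05) evenly between the two rates, which is exactly the minimum-norm adjustment.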
D. L. Kelly
2007-06-01
Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
Henyey-Greenstein scattering indicatrix subroutine for calculation of the spectral (group) phase function ...calculations; b) computing code SRT-RTMC-NSM, intended for narrow-band Spectral Radiation Transfer Ray-Tracing simulation by the Monte Carlo method with... (project) computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer
McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan
2013-01-01
Purpose: The purpose of this study is to adapt an equivalent source model originally developed for conventional CT Monte Carlo dose quantification to the radiation oncology context and validate its application for evaluating concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator. Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors have adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes into account intrinsic filtration and an equivalent filter module characterizing the added bowtie filtration. An equivalent spectrum was generated for an 80, 100, and 125 kVp beam with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full- and half-bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT mode of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within a body (32 cm in diameter) and head (16 cm in diameter) CTDI phantom. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly
Buyukada, Musa
2016-09-01
Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization (PSO), and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was achieved by the ANN61 multi-layer perceptron model with an R² of 0.99994. A blend ratio of 90:10 (PH to coal, wt%), a temperature of 305 °C, and a heating rate of 49 °C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. Validation experiments gave yields of 87.5 ± 0.2% after three replications. Monte Carlo simulations were used for probabilistic assessment of the stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.
[A study of brain inner tissue water molecule self-diffusion model based on Monte Carlo simulation].
Wu, Zhanxiong; Zhu, Shanan; Bin, He
2010-06-01
The study of the water molecule self-diffusion process is important not only for obtaining anatomical information about brain tissue, but also for shedding light on the diffusion of some medicines in brain tissue. In this paper, we summarize the self-diffusion model of water molecules in brain tissue and calculate the self-diffusion coefficient based on Monte Carlo simulation under different conditions. Comparison with the Latour model showed that the two self-diffusion coefficients converge as the diffusion time increases, indicating that the Latour model is a long-time-limit self-diffusion model.
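The estimate underlying such simulations is the Einstein relation MSD = 6Dt in three dimensions; a free-diffusion random-walk sketch (no membranes or restrictions, unlike the brain-tissue models discussed):

```python
import random

random.seed(6)

def msd_after(n_steps, n_walkers, step=1.0):
    """Mean squared displacement of free 3D random walkers taking
    axis-aligned jumps of fixed length: a minimal free-diffusion
    sketch without any restricting geometry."""
    msd = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_steps):
            axis, sign = random.randrange(3), random.choice((-1.0, 1.0))
            if axis == 0:
                x += sign * step
            elif axis == 1:
                y += sign * step
            else:
                z += sign * step
        msd += x * x + y * y + z * z
    return msd / n_walkers

# In free diffusion MSD = 6*D*t; with unit step length and unit time
# per step, the apparent diffusion coefficient is D = MSD / (6 * n).
n = 500
D = msd_after(n, 4000) / (6.0 * n)
print(f"estimated D = {D:.3f} (exact value 1/6 for this walk)")
```

Restricted diffusion (membranes, cell boundaries) makes the apparent D time-dependent, which is exactly the long-time behavior the Latour model describes.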
NASA Astrophysics Data System (ADS)
Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei
2014-09-01
Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetics aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competition growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment of the site positions. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a physical vapor transport method grown SiC ingot showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole- and partial-body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies, and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Extensions have also been added for compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
Models for direct Monte Carlo simulation of coupled vibration-dissociation
NASA Technical Reports Server (NTRS)
Haas, Brian L.; Boyd, Iain D.
1993-01-01
A new model for reactive collisions is developed within the framework of a particle method, which simulates coupled vibration-dissociation (CVD) behavior in high-temperature gases. The fundamental principles of particle simulation methods are introduced with particular attention given to the probability functions employed to select thermal and reactive collisions. Reaction probability functions are derived which favor vibrationally excited molecules as reaction candidates. The new models derived here are used to simulate CVD behavior during thermochemical relaxation of constant-volume O2 reservoirs, as well as the dissociation incubation behavior of postshock N2 flows for comparisons with previous models and experimental data.
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With the number of cells exceeding one million, tallies exceeding one hundred million, and particle histories exceeding ten billion, simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the memory required by the model exceeds that of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully completed the simulation of the full-core pin-by-pin model through domain decomposition and nested parallel computation. The k{sub eff} and flux of each cell are obtained. (authors)
NASA Astrophysics Data System (ADS)
Almarza, N. G.; Pękalski, J.; Ciach, A.
2014-04-01
The triangular lattice model with nearest-neighbor attraction and third-neighbor repulsion, introduced by Pȩkalski, Ciach, and Almarza [J. Chem. Phys. 140, 114701 (2014)] is studied by Monte Carlo simulation. Introduction of appropriate order parameters allowed us to construct a phase diagram, where different phases with patterns made of clusters, bubbles or stripes are thermodynamically stable. We observe, in particular, two distinct lamellar phases—the less ordered one with global orientational order and the more ordered one with both orientational and translational order. Our results concern spontaneous pattern formation on solid surfaces, fluid interfaces or membranes that is driven by competing interactions between adsorbing particles or molecules.
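A minimal Metropolis sketch of such a lattice gas on a periodic triangular lattice (axial coordinates) is given below; the couplings, chemical potential, and temperature are arbitrary illustrative values, not those of the cited model.

```python
import math
import random

L = 12                                  # periodic lattice size
J1, J3, MU, T = 1.0, 0.5, -1.0, 0.5     # illustrative parameters

# Neighbor offsets in axial coordinates for a triangular lattice.
NN = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]   # nearest neighbors
N3 = [(2, 0), (-2, 0), (0, 2), (0, -2), (2, -2), (-2, 2)]   # third neighbors

def local_field(occ, x, y):
    """Energy per unit occupancy at (x, y): nearest-neighbor attraction
    -J1 and third-neighbor repulsion +J3."""
    e = 0.0
    for dx, dy in NN:
        e -= J1 * occ[(x + dx) % L][(y + dy) % L]
    for dx, dy in N3:
        e += J3 * occ[(x + dx) % L][(y + dy) % L]
    return e

def metropolis_sweep(occ, rng=random):
    """One grand-canonical sweep: attempt L*L insertions/removals."""
    for _ in range(L * L):
        x, y = rng.randrange(L), rng.randrange(L)
        dn = 1 - 2 * occ[x][y]                      # +1 insert, -1 remove
        dE = dn * (local_field(occ, x, y) - MU)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            occ[x][y] += dn
```

Order parameters distinguishing cluster, bubble, and stripe patterns would be accumulated over many such sweeps after equilibration.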
Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
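The percentile-ratio construction of the CSAF can be reproduced with a toy Monte Carlo in which the variability of each pathway is drawn from assumed lognormal distributions (the variances below are invented placeholders, not the paper's fitted kinetic constants):

```python
import random

random.seed(0)

def sample_formation(rng):
    """One simulated individual: metabolite formation rises with
    bioactivation and sulfation activity and falls with competing
    oxidation (illustrative lognormal variability only)."""
    bioactivation = rng.lognormvariate(0.0, 0.4)
    sulfation = rng.lognormvariate(0.0, 0.5)
    oxidation = rng.lognormvariate(0.0, 0.3)
    return bioactivation * sulfation / (1.0 + oxidation)

samples = sorted(sample_formation(random) for _ in range(20000))
p50 = samples[len(samples) // 2]
p90 = samples[int(0.90 * len(samples))]
csaf_90 = p90 / p50   # CSAF at the 90th percentile, as in the abstract
```

Dividing other percentiles (e.g. the 99th) by the median gives the corresponding CSAF values for wider population coverage.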
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
Conventional semi-infinite solutions for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restriction. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD{sub B} variations, the mean errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
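As a hedged sketch of the idea: draw repeated samples from a population with known characteristics and examine the empirical sampling distribution of a statistic (here the mean of a unit normal, where theory predicts a standard error of 1/√n):

```python
import random
import statistics

def sampling_distribution(statistic, draw, n, reps, seed=0):
    """Empirical sampling distribution of `statistic` for samples of
    size n drawn from a known population via `draw(rng)`."""
    rng = random.Random(seed)
    return [statistic([draw(rng) for _ in range(n)]) for _ in range(reps)]

# Standard error of the mean for a unit normal with n = 25.
dist = sampling_distribution(statistics.mean,
                             lambda rng: rng.gauss(0.0, 1.0),
                             n=25, reps=2000)
se = statistics.stdev(dist)   # theory: 1 / sqrt(25) = 0.2
```

The same loop works for statistics with no tractable theory by simply swapping in a different `statistic` or `draw`.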
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3 carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
NASA Astrophysics Data System (ADS)
Castells, Victoria; Van Tassel, Paul R.
2005-02-01
Proteins often undergo changes in internal conformation upon interacting with a surface. We investigate the thermodynamics of surface induced conformational change in a lattice model protein using a multicanonical Monte Carlo method. The protein is a linear heteropolymer of 27 segments (of types A and B) confined to a cubic lattice. The segmental order and nearest neighbor contact energies are chosen to yield, in the absence of an adsorbing surface, a unique 3×3×3 folded structure. The surface is a plane of sites interacting either equally with A and B segments (equal affinity surface) or more strongly with the A segments (A affinity surface). We use a multicanonical Monte Carlo algorithm, with configuration bias and jump walking moves, featuring an iteratively updated sampling function that converges to the reciprocal of the density of states 1/Ω(E), E being the potential energy. We find inflection points in the configurational entropy, S(E)=klnΩ(E), for all but a strongly adsorbing equal affinity surface, indicating the presence of free energy barriers to transition. When protein-surface interactions are weak, the free energy profiles F(E)=E-TS(E) qualitatively resemble those of a protein in the absence of a surface: a free energy barrier separates a folded, lowest energy state from globular, higher energy states. The surface acts in this case to stabilize the globular states relative to the folded state. When the protein surface interactions are stronger, the situation differs markedly: the folded state no longer occurs at the lowest energy and free energy barriers may be absent altogether.
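The iteratively updated sampling function converging to 1/Ω(E) is the core of the multicanonical approach. The Wang-Landau-style sketch below illustrates it on a tiny Ising ring rather than the 27-segment lattice protein (which would need the configuration-bias and jump-walking moves of the paper, well beyond a short example); system size and the update schedule are arbitrary.

```python
import math
import random

N = 8   # spins on a periodic ring; small enough to check against exact degeneracies

def energy(s):
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

def flat_histogram_lng(seed=0):
    """Build ln Omega(E) iteratively: each visit to energy E raises lng[E] by f,
    so the walk is pushed toward rarely visited energies and samples with
    weight ~ 1/Omega(E). A production code would also check histogram
    flatness before reducing f."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(N)]
    lng = {}
    e = energy(s)
    f = 1.0
    while f > 1e-4:
        for _ in range(20000):
            i = rng.randrange(N)
            s[i] = -s[i]
            e_new = energy(s)
            # Accept with min(1, Omega(e)/Omega(e_new)), using current estimates.
            if math.log(rng.random()) < lng.get(e, 0.0) - lng.get(e_new, 0.0):
                e = e_new
            else:
                s[i] = -s[i]
            lng[e] = lng.get(e, 0.0) + f
        f /= 2.0
    return lng
```

For this ring the exact degeneracies are Ω(−8)=2, Ω(−4)=56, Ω(0)=140, Ω(4)=56, Ω(8)=2, so differences in the estimated ln Ω can be checked directly; entropies S(E) = k ln Ω(E) follow as in the abstract.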
NASA Astrophysics Data System (ADS)
Arifin, P.; Goldys, E.; Tansley, T. L.
1995-08-01
We present a method of simulating electron transport in low-temperature-grown GaAs by the Monte Carlo method. Low-temperature-grown GaAs contains microscopic inclusions of As, and these inhomogeneities render standard Monte Carlo mobility simulations impossible. Our method overcomes this difficulty and allows the quantitative prediction of electron transport on the basis of principal microscopic material parameters, including the impurity and precipitate concentrations and the precipitate size. The adopted approach involves simulation of a single electron trajectory in real space, while the influence of As precipitates on the GaAs matrix is treated in the framework of a Schottky-barrier model. The validity of this approach is verified by evaluation of the drift velocity in homogeneous GaAs, where excellent agreement with other workers' results is reached. The drift velocity as a function of electric field in low-temperature-grown GaAs is calculated for a range of As precipitate concentrations. The effect of the compensation ratio on drift velocity characteristics is also investigated. It is found that the drift velocity is reduced and the electric field at which the onset of negative differential mobility occurs increases as the precipitate concentration increases. Both effects are related to the reduced electron mean free path in the presence of precipitates. Additionally, the comparatively high low-field electron mobilities in this material are theoretically explained.
NASA Astrophysics Data System (ADS)
Matsumoto, Hiroaki
2002-12-01
The variable sphere (VS) molecular model for the Monte Carlo simulation of rarefied gas flow is introduced to provide consistency for diffusion and viscosity cross-sections with those of any realistic intermolecular potential. It is then applied to the inverse power law (IPL) and Lennard-Jones (LJ) potentials. The VS model has a much simpler scattering law than either the variable hard sphere (VHS) or variable soft sphere (VSS) models; also, it has almost the same computational efficiency as the VHS and VSS models. A simulation of velocity relaxation in a homogeneous space and two comparative simulations of molecular diffusion in a homogeneous heat-bath gas and normal shock wave structure in a monatomic gas are made to examine VS model validity. The relaxation to a Maxwellian distribution function and equipartition between all degrees of freedom are well established; good agreement is shown in the molecular diffusion and shock wave structure between the VS model and the IPL and LJ potentials. The VS model is combined with the statistical inelastic cross-section (SICS) model and applied to simulation of translational and rotational energy relaxation in a homogeneous space. The VS model shows the relaxation of Maxwellian and Boltzmann distribution functions and equipartition between all degrees of freedom. Comparative calculation between the VS model with the SICS (VS-SICS) model and the VSS model with the SICS (VSS-SICS) model is made for rotational relaxation in a nitrogen normal shock wave. Good agreement is shown in the shock wave structure and rotational energy distribution function between the VS-SICS model and the VSS-SICS model. This study demonstrates that diffusion and viscosity cross-sections, rather than the scattering law of each molecular collision, affect macroscopic transport phenomena.
NASA Astrophysics Data System (ADS)
Cheng, Guoxin; Liu, Lie
2011-06-01
Based on Vaughan's empirical formula for secondary emission yield and the assumption of mutual exclusion of each type of secondary electron, a mathematically self-consistent secondary emission model is proposed. It identifies each generated secondary electron as either elastically reflected, rediffused, or true secondary; hence, it allows the use of distinct emission energy and angular distributions for each type of electron. A Monte Carlo implementation of the model is presented, and second-order algorithms for particle collection and ejection at the secondary-emission wall are developed in order to incorporate the secondary electron emission process in the standard leap-frog integrator. The accuracy of these algorithms is analyzed for general fields and is confirmed by comparing the numerically computed values with the exact solution under a homogeneous magnetic field. In particular, the phenomenon of multipactor electron discharge on a dielectric is simulated to verify the usefulness of the model developed in this paper.
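Vaughan's empirical yield curve, which the model above builds on, can be sketched as follows. This is one common form of the fit; the parameter values (peak yield, peak energy, threshold, surface-smoothness factor) are illustrative defaults, not those used in the paper.

```python
import math

def vaughan_yield(E, theta=0.0, E_max0=400.0, delta_max0=2.0,
                  E0=12.5, ks=1.0):
    """Secondary emission yield per Vaughan's empirical fit (one common
    form; parameters here are illustrative). E is impact energy in eV,
    theta the incidence angle from the surface normal in radians."""
    # Angular correction: grazing incidence raises both the peak yield
    # and the energy at which it occurs.
    factor = 1.0 + ks * theta * theta / (2.0 * math.pi)
    E_max = E_max0 * factor
    delta_max = delta_max0 * factor
    if E <= E0:
        return 0.0
    v = (E - E0) / (E_max - E0)
    if v <= 3.6:
        k = 0.56 if v < 1.0 else 0.25
        return delta_max * (v * math.exp(1.0 - v)) ** k
    return delta_max * 1.125 / v ** 0.35   # high-energy tail of the fit
```

In a Monte Carlo emission step, this yield would feed the random assignment of each secondary as elastic, rediffused, or true secondary.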
Monte Carlo Simulation of an Arc Therapy Treatment by Means of a PC Distribution Model
NASA Astrophysics Data System (ADS)
Leal, A.; Sánchez-Doblado, F.; Perucha, M.; Rincón, M.; Arráns, R.; Bernal, C.; Carrasco, E.
It would always be desirable to have an independent assessment of a planning system. Monte Carlo (MC) offers an accurate way of checking dose distributions in non-homogeneous volumes. Nevertheless, its main drawback is the long processing time needed.
Saloranta, Tuomo M; Armitage, James M; Haario, Heikki; Naes, Kristoffer; Cousins, Ian T; Barton, David N
2008-01-01
Multimedia environmental fate models are useful tools to investigate the long-term impacts of remediation measures designed to alleviate potential ecological and human health concerns in contaminated areas. Estimating and communicating the uncertainties associated with the model simulations is a critical task for demonstrating the transparency and reliability of the results. The Extended Fourier Amplitude Sensitivity Test (Extended FAST) method for sensitivity analysis and the Bayesian Markov chain Monte Carlo (MCMC) method for uncertainty analysis and model calibration have several advantages over methods typically applied for multimedia environmental fate models. Most importantly, the simulation results and their uncertainties can be anchored to the available observations and their uncertainties. We apply these techniques for simulating the historical fate of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the Grenland fjords, Norway, and for predicting the effects of different contaminated sediment remediation (capping) scenarios on the future levels of PCDD/Fs in cod and crab therein. The remediation scenario simulations show that a significant remediation effect is first seen when significant portions of the contaminated sediment areas are cleaned up, and that an increase in capping area leads to both earlier achievement of good fjord status and narrower uncertainty in the predicted timing of that achievement.
Morton, April M; McManamay, Ryan A; Nagle, Nicholas N; Piburn, Jesse O; Stewart, Robert N; Surendran Nair, Sujithkumar
2016-01-01
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques that depend on detailed region-specific data sources not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification. Acknowledgment: This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
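A minimal sketch of the statistical idea: propagate an assumed error in a public input (here a ±5% error on a block's household count) through a per-household consumption draw, so the output distribution reflects input-data uncertainty. All distributions and numbers below are invented placeholders, not the paper's fitted models.

```python
import math
import random
import statistics

def simulate_block_demand(households, mean_kwh, rng):
    """Demand for one census block: resample the reported household count
    within an assumed +/-5% error, then draw each household's use from a
    lognormal (illustrative choices only)."""
    n = max(0, round(rng.gauss(households, 0.05 * households)))
    return sum(rng.lognormvariate(math.log(mean_kwh), 0.3) for _ in range(n))

def demand_distribution(households, mean_kwh, reps=1000, seed=0):
    """Monte Carlo distribution of block demand under input uncertainty."""
    rng = random.Random(seed)
    return [simulate_block_demand(households, mean_kwh, rng) for _ in range(reps)]
```

The spread of the resulting distribution, rather than a single point estimate, is what gets reported for each block.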
MBR Monte Carlo Simulation in PYTHIA8
NASA Astrophysics Data System (ADS)
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Assessment of high-fidelity collision models in the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Weaver, Andrew B.
Advances in computer technology over the decades have allowed for more complex physics to be modeled in the DSMC method. Beginning with the first paper on DSMC in 1963, 30,000 collision events per hour were simulated using a simple hard sphere model. Today, more than 10 billion collision events can be simulated per hour for the same problem. Many new and more physically realistic collision models, such as the Lennard-Jones potential and the forced harmonic oscillator model, have been introduced into DSMC. However, the fact that computer resources are more readily available and higher-fidelity models have been developed does not necessitate their usage. It is important to understand how such high-fidelity models affect the output quantities of interest in engineering applications. The effects of elastic and inelastic collision models on compressible Couette flow, ground-state atomic oxygen transport properties, and normal shock waves have therefore been investigated. Recommendations for variable soft sphere and Lennard-Jones model parameters are made based on a critical review of recent ab initio calculations and experimental measurements of transport properties.
Liu, Changzheng; Lin, Zhenhong
2016-12-08
Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and there is a substantial risk of low penetration in the early and midterm market. The top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Here, continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift distributions toward the right, i.e., increase the probability of great market success.
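The integration of a logit choice model with Monte Carlo sampling of uncertain inputs can be sketched as below. For brevity this uses a plain multinomial logit rather than the study's nested form, and all utilities and parameter ranges are invented placeholders.

```python
import math
import random

def logit_shares(utilities):
    """Multinomial logit choice probabilities (a simplification of the
    nested form used in the study)."""
    m = max(utilities.values())
    expu = {k: math.exp(u - m) for k, u in utilities.items()}
    z = sum(expu.values())
    return {k: e / z for k, e in expu.items()}

def pev_share_samples(reps=5000, seed=0):
    """Draw uncertain inputs (price sensitivity, energy-cost saving,
    range penalty) and record the implied PEV share each time."""
    rng = random.Random(seed)
    shares = []
    for _ in range(reps):
        beta_price = rng.uniform(0.5, 2.0)      # price sensitivity
        energy_saving = rng.uniform(0.0, 1.0)   # utility of lower fuel cost
        range_penalty = rng.uniform(0.0, 1.5)   # range limitation
        u = {"gasoline": 0.0,
             "pev": -beta_price * 1.0 + energy_saving - range_penalty}
        shares.append(logit_shares(u)["pev"])
    return shares
```

The spread of the resulting shares is the market-penetration uncertainty; the sensitivity of that spread to each sampled input identifies the top contributing factors.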
NASA Astrophysics Data System (ADS)
Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul
2002-06-01
The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife®, a precision method for treating intracranial lesions. Radiation from a single 60Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet for photons and electrons were calculated. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife® dosimetry, has been computed and compared with experimental results, with calculations performed by other authors with the use of the EGS4 Monte Carlo code, and data provided by the treatment planning system Gamma Plan®. Good agreement was found between these data and results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery.
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
- Good-quality risk data, usually collected in risk interviews of the project team, management, and others knowledgeable in the risks of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk, and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project
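The simultaneous cost-time simulation with a shared risk driver can be sketched on a toy two-path CPM network; the activity ranges, burn rate, and driver range below are invented placeholders, not the paper's project data.

```python
import random

def simulate_project(reps=5000, seed=0):
    """Joint cost-time Monte Carlo for a toy two-path CPM network driven
    by a shared 'risk driver' multiplier."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(reps):
        driver = rng.uniform(0.9, 1.3)             # shared risk multiplier
        path_a = driver * (rng.triangular(8, 14, 10) + rng.triangular(4, 9, 6))
        path_b = driver * rng.triangular(11, 18, 13)
        duration = max(path_a, path_b)             # CPM: longest path governs
        cost = 50.0 + 3.0 * duration * rng.uniform(0.95, 1.15)  # burn-rate risk
        pairs.append((duration, cost))
    return pairs

def percentile(values, p):
    v = sorted(values)
    return v[int(p * (len(v) - 1))]

pairs = simulate_project()
p80_time = percentile([t for t, _ in pairs], 0.80)  # schedule contingency target
p80_cost = percentile([c for _, c in pairs], 0.80)  # cost contingency target
```

Plotting the collected pairs gives the 'football chart' of joint cost-time probability, and the P-80 values are the contingency-reserve targets discussed above.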
Zhong, Xiewei; Wen, Xiang; Zhu, Dan
2014-01-27
Fiber reflectance spectroscopy is a non-invasive method for diagnosing skin diseases or evaluating aesthetic efficacy, but it depends on the validity of the inverse model. In this work, a lookup-table-based inverse model is developed using two-layered Monte Carlo simulations in order to extract the physiological and optical properties of skin. The melanin volume fraction and blood oxygen parameters are extracted from fiber reflectance spectra of in vivo human skin. The former shows good agreement with a commercial skin-melanin probe, and the latter (based on forearm venous occlusion and ischemia, and a hot compress experiment) shows that the measurements are in agreement with physiological changes. These results verify the potential of this spectroscopy technique for evaluating the physiological characteristics of human skin.
Ye, Hong-Zhou; Sun, Chong; Jiang, Hong
2015-03-14
Materials with spin-crossover (SCO) properties hold great potential for information storage and have therefore attracted considerable attention in recent decades. The hysteresis phenomena accompanying SCO are attributed to intermolecular cooperativity whose underlying mechanism may have a vibronic origin. In this work, a new vibronic Ising-like model in which the elastic coupling between SCO centers is included by considering harmonic stretching and bending (SAB) interactions is proposed and solved by Monte Carlo (MC) simulations. The key parameters in the new model, k1 and k2, corresponding to the elastic constants of the stretching and bending modes, respectively, can be directly related to the macroscopic bulk and shear moduli of the material of study, which can be readily estimated either from experimental measurements or first-principles calculations. Using realistic parameters estimated from density-functional theory calculations of a specific polymeric coordination SCO compound, [Fe(pz)Pt(CN)4]·2H2O (pz = pyrazine), temperature-induced hysteresis and pressure effects on SCO phenomena are simulated successfully. Our MC simulations shed light on the role of vibronic couplings in the thermal hysteresis of SCO systems, and also point out the limitations of highly simplified Ising-like models for quantitative description of real SCO systems, which will be of great value for the development of more realistic SCO models.
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S
2015-12-07
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
Hanford, Amanda D; O'Connor, Patrick D; Anderson, James B; Long, Lyle N
2008-06-01
In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of frequencies. This particle method allows for treatment of acoustic phenomena at high Knudsen numbers, corresponding to low densities and a high ratio of the molecular mean free path to wavelength. Different methods to model the internal degrees of freedom of diatomic molecules and the exchange of translational, rotational and vibrational energies in collisions are employed in the current simulations of a diatomic gas. One of these methods is the fully classical rigid-rotor/harmonic-oscillator model for rotation and vibration. A second method takes into account the discrete quantum energy levels for vibration with the closely spaced rotational levels classically treated. This method gives a more realistic representation of the internal structure of diatomic and polyatomic molecules. Applications of these methods are investigated in diatomic nitrogen gas in order to study the propagation of sound and its attenuation and dispersion along with their dependence on temperature. With the direct simulation method, significant deviations from continuum predictions are also observed for high Knudsen number flows.
Cluster hybrid Monte Carlo simulation algorithms.
Plascak, J A; Ferrenberg, Alan M; Landau, D P
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
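The hybrid scheme described in this abstract — Wolff cluster flips combined with Metropolis single-spin flips — can be sketched for the spin-1/2 Ising model as follows. This is a minimal illustrative implementation, not the authors' code; the update mix (one cluster flip per Metropolis sweep) is an arbitrary choice.

```python
import math
import random

def metropolis_sweep(spins, size, beta, rng):
    """One Metropolis sweep over a size x size Ising lattice (periodic boundaries, J = 1)."""
    for _ in range(size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        nn = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
              + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
        delta_e = 2.0 * spins[i][j] * nn  # energy cost of flipping spin (i, j)
        if delta_e <= 0 or rng.random() < math.exp(-beta * delta_e):
            spins[i][j] = -spins[i][j]

def wolff_flip(spins, size, beta, rng):
    """Grow and flip one Wolff cluster; bond probability p = 1 - exp(-2*beta*J)."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    i, j = rng.randrange(size), rng.randrange(size)
    seed = spins[i][j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1) % size, y), ((x - 1) % size, y), (x, (y + 1) % size), (x, (y - 1) % size):
            if (nx, ny) not in cluster and spins[nx][ny] == seed and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        spins[x][y] = -seed

def hybrid_step(spins, size, beta, rng):
    """One hybrid update: a Wolff cluster flip followed by a Metropolis sweep."""
    wolff_flip(spins, size, beta, rng)
    metropolis_sweep(spins, size, beta, rng)
```

Deep in the ordered phase (large β) a handful of hybrid steps drives the magnetization per spin close to ±1; near criticality the cluster moves are what defeat the slowing-down that plagues pure single-spin dynamics.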
NASA Astrophysics Data System (ADS)
Beyerlein, Irene Jane
Many next-generation structural composites are likely to be engineered from stiff fibers embedded in ceramic, metallic, or polymeric matrices. Ironically, the complexity in composite failure response that renders them superior to traditional materials also makes them difficult to characterize for high-reliability design. Challenges lie in modeling the interacting, randomly evolving micromechanical damage, such as fiber break nucleation and coalescence, and in the fact that strength, lifetime, and failure mode vary substantially between otherwise identical specimens. My thesis research involves developing (i) computational, micromechanical stress transfer models around multiple fiber breaks in fiber composites, (ii) Monte Carlo simulation models to reproduce their failure process, and (iii) interpretative probability models. In Chapter 1, a Monte Carlo model is developed to study the effects of fiber strength statistics on the fracture process and strength distribution of unnotched and notched elastic composite laminae of size N. The simulation model couples a micromechanical stress analysis, called break influence superposition, with Weibull fiber strengths, wherein fiber strength varies negligibly along the fiber length. Examination of various statistical aspects of composite failure reveals mechanisms responsible for flaw intolerance in the short-notch regime and for toughness in the long-notch regime. Probability models and large-N approximations are developed in Chapter 2 to model the effects of variation in fiber strength on statistical composite fracture response. Based on the probabilities of simple sequences of failure events, probability models for crack and distributed cluster growth and fracture resistance are developed. Comparisons with simulation results show that these models and approximations successfully predicted the unnotched and notched composite strength distributions and that fracture toughness grows slowly as (ln N)^{1/γ}, where γ is the fiber Weibull
NASA Astrophysics Data System (ADS)
Giura, Stefano; Schoen, Martin
2014-08-01
We consider the phase behavior of a simple model of a liquid crystal by means of modified mean-field density-functional theory (MMF DFT) and Monte Carlo simulations in the grand canonical ensemble (GCEMC). The pairwise additive interactions between liquid-crystal molecules are modeled via a Lennard-Jones potential in which the attractive contribution depends on the orientation of the molecules. We derive the form of this orientation dependence through an expansion in terms of rotational invariants. Our MMF DFT predicts two topologically different phase diagrams. At weak to intermediate coupling of the orientation dependent attraction, there is a discontinuous isotropic-nematic liquid-liquid phase transition in addition to the gas-isotropic liquid one. In the limit of strong coupling, the gas-isotropic liquid critical point is suppressed in favor of a fluid- (gas- or isotropic-) nematic phase transition which is always discontinuous. By considering three representative isotherms in parallel GCEMC simulations, we confirm the general topology of the phase diagram predicted by MMF DFT at intermediate coupling strength. From the combined MMF DFT-GCEMC approach, we conclude that the isotropic-nematic phase transition is very weakly first order, thus confirming earlier computer simulation results for the same model [see M. Greschek and M. Schoen, Phys. Rev. E 83, 011704 (2011), 10.1103/PhysRevE.83.011704].
NASA Astrophysics Data System (ADS)
Llano-Restrepo, Mario Andres
A study of concentrated aqueous alkali halide solutions is made at the molecular level, through modeling and computer simulation of their structural and thermodynamic properties. It is found that the HNC approximation is the best integral equation theory to predict such properties within the framework of the primitive model (PM). The intrinsic limitations of the PM in describing ionic association and hydration effects are addressed and discussed in order to emphasize the need for explicitly including the water molecules in the treatment of aqueous electrolyte solutions by means of a civilized model (CM). As a step toward developing a CM as simple as possible, it is shown that a modified version of the SPC model of liquid water, in which the Lennard-Jones interaction between intermolecular oxygen sites is replaced by a hard-core interaction, is still successful enough to predict the degree of hydrogen bonding of real water. A simple civilized model (SCM) (in which the ions are treated as hard spheres interacting through Coulombic potentials and the water molecules are simulated using the simplified SPC model) is introduced in order to study the changes in the structural features of various aqueous alkali halide solutions upon varying both the concentration and the size of the ions. Both cations and anions are found to be solvated by the water molecules at the expense of a breakdown in the hydrogen-bonded water network. Hydration numbers are reported for the first time for NaBr and KBr, and the first simulation-based estimates for LiBr, NaI and KI are also obtained. In several cases, values of the hydration numbers based on the SCM are found to be in excellent agreement with available experimental results obtained from x-ray diffraction measurements. Finally, it is shown that a neoprimitive model (NPM) can be developed by incorporating some of the structural features seen in the SCM into the short-range part of the PM interionic potential via a shielded square well whose
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Development of Monte Carlo Capability for Orion Parachute Simulations
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
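The dispersion workflow described above — a nominal input set, analyst-defined dispersion rules, repeated core-simulation runs with stored results — can be sketched generically. The rule schema, all names, and the toy descent-rate "core-simulation" below are illustrative assumptions, not the NASA tool's actual interface.

```python
import math
import random

def disperse(nominal, rules, rng):
    """Apply analyst-defined dispersion rules to a nominal input set.

    Each rule maps an input name to a (distribution, params) pair; this rule
    format is invented for illustration, not the tool's actual schema.
    """
    dispersed = dict(nominal)
    for name, (dist, params) in rules.items():
        if dist == "normal":
            dispersed[name] = rng.gauss(nominal[name], params["sigma"])
        elif dist == "uniform":
            dispersed[name] = nominal[name] + rng.uniform(params["lo"], params["hi"])
    return dispersed

def monte_carlo(nominal, rules, run_sim, n_cases, seed=0):
    """Repeatedly execute the core-simulation with dispersed inputs, storing results."""
    rng = random.Random(seed)
    return [run_sim(disperse(nominal, rules, rng)) for _ in range(n_cases)]

def descent_rate(inputs):
    """Toy stand-in for a core-simulation: steady descent rate from the drag
    balance v = sqrt(2 W / (rho * Cd * S))."""
    return math.sqrt(2.0 * inputs["weight"] / (inputs["rho"] * inputs["cd"] * inputs["area"]))

nominal = {"weight": 10000.0, "rho": 1.225, "cd": 0.8, "area": 700.0}
rules = {
    "cd": ("normal", {"sigma": 0.05}),
    "weight": ("uniform", {"lo": -200.0, "hi": 200.0}),
}
results = monte_carlo(nominal, rules, descent_rate, n_cases=500)
```

Keeping the dispersion layer separate from the core-simulation, as here, is what lets one Monte Carlo tool drive several simulations that share no input format.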
Crystal nuclei in melts: a Monte Carlo simulation of a model for attractive colloids
NASA Astrophysics Data System (ADS)
Statt, Antonia; Virnau, Peter; Binder, Kurt
2015-09-01
As a model for a suspension of hard-sphere-like colloidal particles where small non-adsorbing dissolved polymers create a depletion attraction, we introduce an effective colloid-colloid potential closely related to the Asakura-Oosawa model, but one that does not have any discontinuities. In simulations, this model straightforwardly allows the calculation of the pressure from the virial formula, and the phase transition in the bulk from the liquid to crystalline solid can be accurately located from a study where a stable coexistence of a crystalline slab with a surrounding liquid phase occurs. For this model, crystalline nuclei surrounded by fluid are studied both by identifying the crystal-fluid interface on the particle level (using suitable bond orientational order parameters to distinguish the phases) and by 'thermodynamic' means, i.e. the latter method amounts to computing the enhancement of chemical potential and pressure relative to their coexistence values. We show that the chemical potential can be obtained from simulating thick films, where one wall with a rather long-range repulsion is present, since near this wall, the Widom particle insertion method works, exploiting the fact that the chemical potential in the system is homogeneous. Finally, the surface excess free energy of the nucleus is obtained, for a wide range of nucleus volumes. From this method, it is established that classical nucleation theory works, showing that for the present model, the anisotropy of the interface excess free energy of crystals and their resulting non-spherical shape has only a very small effect on the barrier.
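The Widom particle-insertion method mentioned above estimates the excess chemical potential as μ_ex = -(1/β) ln⟨exp(-βΔU)⟩, where ΔU is the interaction energy of a ghost particle inserted at a random position. A minimal sketch, here with a plain hard-sphere pair potential rather than the paper's continuous Asakura-Oosawa-like colloid model:

```python
import math
import random

def widom_mu_excess(positions, box, beta, pair_energy, n_insert, rng):
    """Widom insertion: mu_ex = -(1/beta) * ln< exp(-beta * dU) >, where dU is
    the interaction energy of a ghost particle at a uniformly random position."""
    boltzmann_sum = 0.0
    for _ in range(n_insert):
        test = (rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
        du = sum(pair_energy(test, p, box) for p in positions)
        boltzmann_sum += math.exp(-beta * du)
    return -math.log(boltzmann_sum / n_insert) / beta

def hard_sphere_energy(a, b, box, sigma=1.0):
    """Hard-sphere pair energy with the minimum-image convention:
    infinite on overlap (r < sigma), zero otherwise."""
    r2 = 0.0
    for d in (a[0] - b[0], a[1] - b[1], a[2] - b[2]):
        d -= box * round(d / box)  # minimum image
        r2 += d * d
    return float("inf") if r2 < sigma * sigma else 0.0
```

For hard particles the average reduces to the fraction of non-overlapping insertions, which is why the estimator degrades at high density and why the study above applies it in the dilute region near the repulsive wall, where the homogeneity of the chemical potential carries the result to the rest of the system.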
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model-building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically only a few models are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate this approach and to demonstrate the advantages of the automated process over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC generally selected the best model. We believe that the approach we proposed may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
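The data-based selection criterion used above can be illustrated concretely. For least-squares fits, AIC = n ln(RSS/n) + 2k (up to an additive constant), so an extra parameter must buy a genuine reduction in residuals. The dissolution profile, model forms, and grid-search fitting below are invented for illustration; they are not the paper's data or code.

```python
import math

def aic(rss_value, n_obs, n_params):
    """Akaike Information Criterion for a least-squares fit (up to a constant):
    AIC = n * ln(RSS / n) + 2k."""
    return n_obs * math.log(rss_value / n_obs) + 2 * n_params

def rss(model, params, times, observed):
    """Residual sum of squares of a dissolution model against observations."""
    return sum((model(t, *params) - y) ** 2 for t, y in zip(times, observed))

def first_order(t, k):
    return 1.0 - math.exp(-k * t)

def weibull(t, scale, shape):
    return 1.0 - math.exp(-((t / scale) ** shape))

# Synthetic fraction-dissolved profile (hours), invented for illustration
times = [0.5, 1, 2, 4, 6, 8, 12]
obs = [0.066, 0.175, 0.420, 0.786, 0.941, 0.987, 1.000]

# Crude grid-search fits; a real IVIVC workflow would use a proper optimizer
k_best = min((rss(first_order, (k,), times, obs), k)
             for k in (i / 100 for i in range(1, 200)))
wb_best = min((rss(weibull, (a, b), times, obs), a, b)
              for a in (i / 10 for i in range(5, 60))
              for b in (i / 10 for i in range(5, 30)))

aic_fo = aic(k_best[0], len(times), 1)   # one fitted parameter
aic_wb = aic(wb_best[0], len(times), 2)  # two fitted parameters
```

Here the sigmoidal profile favors the Weibull model despite its AIC penalty for the second parameter, mirroring the paper's observation that Weibull-type models are often preferred over simpler ones.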
McCreddin, A; Alam, M S; McNabola, A
2015-01-01
An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies.
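The winning technique — randomly sampling concentrations from statistical distributions of exposure in typical microenvironments — can be sketched as follows. The microenvironments, lognormal parameters, and time budget are invented for illustration; they are not the Dublin study's fitted values.

```python
import math
import random
import statistics

# Hypothetical microenvironment statistics: geometric mean / geometric SD of
# PM10 (ug/m3) and hours per day -- invented numbers, not the study's fits.
MICROENVS = {
    "home":    {"gm": 20.0, "gsd": 1.8, "hours": 14.0},
    "office":  {"gm": 15.0, "gsd": 1.6, "hours": 8.0},
    "commute": {"gm": 40.0, "gsd": 2.0, "hours": 2.0},
}

def simulate_daily_exposure(rng):
    """One simulated 24-h time-weighted-average exposure: draw a concentration
    for each microenvironment from a lognormal and weight it by time spent there."""
    weighted = 0.0
    for env in MICROENVS.values():
        conc = rng.lognormvariate(math.log(env["gm"]), math.log(env["gsd"]))
        weighted += conc * env["hours"]
    return weighted / 24.0

rng = random.Random(1)
exposures = [simulate_daily_exposure(rng) for _ in range(2000)]
mean_exposure = statistics.fmean(exposures)
```

Because each simulated day only needs draws from pre-fitted distributions, the approach can predict exposure distributions without further monitoring — the practical advantage the study highlights.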
NASA Astrophysics Data System (ADS)
Wüst, Thomas; Hulliger, Jürg
2005-02-01
A layer-by-layer growth model is presented for the theoretical investigation of growth-induced polarity formation in solid solutions H1-XGX of polar (H) and nonpolar (G) molecules (X: molar fraction of G molecules in the solid, 0 < X < 1).
Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K
2015-06-15
Purpose: Predicted PET images on the basis of an analytical filtering approach for proton range verification have been successfully developed and validated using FLUKA Monte Carlo (MC) codes and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification against the GATE/GEANT4 Monte Carlo simulation codes. Methods: We performed two experiments to validate the β+-isotope yields predicted by the analytical model against GATE/GEANT4 simulations. The first experiment evaluates the accuracy of predicting β+-yields as a function of irradiated proton energy. In the second experiment, we simulate homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The filtered β+-yield distributions from the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off regions. Results: The filtered and MC-simulated β+-yield distributions were compared under different conditions. First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range differences between the filtered and MC-simulated β+-yields in the distal fall-off region are within 1.5 mm for all materials used. These findings validate the usefulness of the analytical filtering model for range verification of proton therapy in GATE Monte Carlo simulations. In addition, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields using the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the fact that large discrepancies between the MC-simulated and predicted β+-yield distributions were observed, the study proves the effectiveness of the analytical filtering model for proton range verification using
NASA Astrophysics Data System (ADS)
Sinha, Indrajit; Mukherjee, Ashim K.
2014-03-01
The oxidation of CO on Pt-group metal surfaces has attracted widespread attention for a long time due to its interesting oscillatory kinetics and spatiotemporal behavior. The use of STM in conjunction with other experimental data has confirmed the validity of the surface reconstruction (SR) model under low pressure and the more recent surface oxide (SO) model which is possible under sub-atmospheric pressure conditions [1]. In the SR model the surface is periodically reconstructed below a certain low critical CO-coverage and this reconstruction is lifted above a second, higher critical CO-coverage. Alternatively the SO model proposes periodic switching between a low-reactivity metallic surface and a high-reactivity oxide surface. Here we present an overview of our recent kinetic Monte Carlo (KMC) simulation studies on the oscillatory kinetics of surface-catalyzed CO oxidation. Different modifications of the lattice gas Ziff-Gulari-Barshad (ZGB) model have been utilized or proposed for this purpose. First we present the effect of desorption on the ZGB reactive-to-poisoned irreversible phase transition in the SR model. Next we discuss our recent research on KMC simulation of the SO model. The ZGB framework is utilized to propose a new model incorporating not only the standard Langmuir-Hinshelwood (LH) mechanism, but also introducing the Mars-van Krevelen (MvK) mechanism for the surface oxide phase [5]. Phase diagrams, which are plots of long-time averages of various oscillating quantities against the normalized CO pressure, show two or three transitions depending on the CO-coverage critical threshold (CT) value beyond which all adsorbed oxygen atoms are converted to surface oxide.
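The ZGB lattice-gas model underlying these studies is compact enough to sketch: each attempt picks a site, adsorbs CO (probability y) on an empty site or dissociatively adsorbs O2 onto an adjacent empty pair, and any adjacent CO-O pair reacts instantly to CO2, emptying both sites. A minimal illustrative implementation of the basic model (without the desorption or Mars-van Krevelen extensions discussed above):

```python
import random

EMPTY, CO, O = 0, 1, 2

def neighbors(i, j, size):
    """Four nearest neighbors on a periodic square lattice."""
    return [((i + 1) % size, j), ((i - 1) % size, j), (i, (j + 1) % size), (i, (j - 1) % size)]

def react_with_neighbor(lat, i, j, size, partner, rng):
    """If site (i, j) has an adsorbed neighbor of the partner species, react:
    both sites become empty (CO + O -> CO2 desorbs instantly in ZGB)."""
    options = [(x, y) for x, y in neighbors(i, j, size) if lat[x][y] == partner]
    if options and lat[i][j] != EMPTY:
        x, y = rng.choice(options)
        lat[i][j] = EMPTY
        lat[x][y] = EMPTY

def zgb_step(lat, size, y_co, rng):
    """One adsorption attempt of the basic ZGB model at normalized CO pressure y_co."""
    i, j = rng.randrange(size), rng.randrange(size)
    if rng.random() < y_co:
        # CO adsorbs on an empty site, then reacts with a random O neighbor
        if lat[i][j] == EMPTY:
            lat[i][j] = CO
            react_with_neighbor(lat, i, j, size, O, rng)
    else:
        # O2 needs a pair of adjacent empty sites; each O atom then checks for CO
        empties = [(x, y) for x, y in neighbors(i, j, size) if lat[x][y] == EMPTY]
        if lat[i][j] == EMPTY and empties:
            x, y = rng.choice(empties)
            lat[i][j] = O
            lat[x][y] = O
            react_with_neighbor(lat, i, j, size, CO, rng)
            react_with_neighbor(lat, x, y, size, CO, rng)
```

At low y the surface poisons with oxygen and at high y with CO; the reactive window between the two irreversible transitions is what the desorption and MvK modifications above reshape into oscillatory behavior.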
Ethayaraja, M; Dutta, Kanchan; Bandyopadhyaya, Rajdip
2006-08-24
Modeling the nanoparticle formation mechanism in water-in-oil microemulsion, a self-assembled colloidal template, has been addressed in this paper by two formalisms: the deterministic population balance equation (PBE) model and stochastic Monte Carlo (MC) simulation. These are based on time-scale analysis of elementary events consisting of reactant mass transport, solid solubilization, reaction, coalescence-exchange of drops, and finally nucleation and growth of nanoparticles. For the first time in such a PBE model, realistic binomial redistribution of molecules in the daughter drops (after coalescence-exchange of two drops) has been explicitly implemented. This has resulted in a very general model, applicable to processes with arbitrary relative rates of coalescence-exchange and nucleation. Both the deterministic and stochastic routes could account for the inherent randomness in the elementary events and successfully explained temporal evolution of mean and variance of nanoparticle size distribution. This has been illustrated by comparison with different yet broadly similar experiments, operating either under coalescence (lime carbonation to make CaCO3 nanoparticles) or nucleation (hydride hydrolysis to make Ca(OH)2 nanoparticles) dominant regimes. Our calculations are robust in being able to predict for very diverse process operation times: up to 26 min and 5 h for carbonation and hydrolysis experiments, respectively. Model predictions show that an increase in the external reactant addition rate to microemulsion solution is beneficial under certain general conditions, increasing the nanoparticle production rate significantly without any undesirable and perceptible change in particle size.
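The binomial redistribution step highlighted above is simple to state in code: when two drops coalesce and re-split, each solute molecule independently ends up in either daughter drop with probability 1/2, rather than the populations being halved deterministically. A sketch:

```python
import random

def coalesce_exchange(n1, n2, rng):
    """Coalescence-exchange of two drops: all n1 + n2 solute molecules are
    redistributed binomially (p = 1/2) between the two daughter drops."""
    total = n1 + n2
    to_first = sum(1 for _ in range(total) if rng.random() < 0.5)
    return to_first, total - to_first

rng = random.Random(7)
# Repeated exchanges between a 10-molecule and a 30-molecule drop
draws = [coalesce_exchange(10, 30, rng) for _ in range(5000)]
```

Each daughter averages half the combined population, but the binomial spread (standard deviation sqrt(n1 + n2)/2) is exactly the fluctuation that a deterministic even split would suppress — and which drives the variance of the particle size distribution.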
Litaize, O.; Serot, O.
2010-11-15
A Monte Carlo simulation of the fission fragment deexcitation process was developed in order to analyze and predict postfission-related nuclear data which are of crucial importance for basic and applied nuclear physics. The basic ideas of such a simulation were already developed in the past. In the present work, a refined model is proposed in order to make a reliable description of the distributions related to fission fragments as well as to prompt neutron and γ energies and multiplicities. This refined model is mainly based on a mass-dependent temperature ratio law used for the initial excitation energy partition of the fission fragments and a spin-dependent excitation energy limit for neutron emission. These phenomenological improvements allow us to reproduce with good agreement the 252Cf(sf) experimental data on prompt fission neutron multiplicity ν(A), ν(TKE), the neutron multiplicity distribution P(ν), as well as their energy spectra N(E), and lastly the energy release in fission.
Li, Jun; Calo, Victor M.
2013-09-15
We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and consistency of results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data is not available.
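For reference, the 12-6 Lennard-Jones pair potential on which such single-particle models are built has its minimum of depth -ε at r = 2^{1/6}σ:

```python
def lj(r, epsilon, sigma):
    """12-6 Lennard-Jones pair potential: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

Fitting just ε and σ per species, with one interaction site per molecule, is what makes the simplified model cheap: there are no partial charges, hence no Ewald summation to evaluate.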
Zhou, X. W.; Yang, N. Y. C.
2014-03-14
Electronic properties of semiconductor devices are sensitive to defects such as second-phase precipitates, grain sizes, and voids. These defects can evolve over time, especially under oxidation environments, and it is therefore important to understand the resulting aging behavior for reliable application of devices. In this paper, we propose a kinetic Monte Carlo framework capable of simultaneously simulating the evolution of second phases, precipitates, grain sizes, and voids in complicated systems involving many species, including oxygen. This kinetic Monte Carlo model calculates the energy barriers of various events based directly on experimental data. As a first step of our model implementation, we incorporate the second-phase formation module in the parallel kinetic Monte Carlo code SPPARKS. Selected aging simulations are performed to examine the formation of second-phase precipitates at the electroplated Au/Bi2Te3 interface under oxygen and oxygen-free environments, and the results are compared with the corresponding experiments.
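A rejection-free kinetic Monte Carlo step of the kind such frameworks build on selects one event with probability proportional to its rate and advances time by an exponentially distributed increment. The sketch below uses an Arrhenius rate form with assumed prefactor and barrier values; the event names are hypothetical, and this is not the SPPARKS implementation.

```python
import math
import random

def arrhenius(prefactor_hz, barrier_ev, temperature_k):
    """Arrhenius rate k = nu * exp(-Ea / (kB * T)); in the framework above the
    barriers come from experimental data (the values used here are assumed)."""
    k_b_ev = 8.617333e-5  # Boltzmann constant in eV/K
    return prefactor_hz * math.exp(-barrier_ev / (k_b_ev * temperature_k))

def kmc_select(events, rng):
    """Rejection-free KMC step: choose an event with probability proportional
    to its rate and advance the clock by dt = -ln(u) / R_total."""
    total_rate = sum(rate for _, rate in events)
    target = rng.random() * total_rate
    cumulative = 0.0
    chosen = events[-1][0]
    for name, rate in events:
        cumulative += rate
        if target < cumulative:
            chosen = name
            break
    return chosen, -math.log(rng.random()) / total_rate
```

Because every step fires some event, the method remains efficient even when individual rates differ by many orders of magnitude, as they do between, say, surface diffusion and oxide formation.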
Duda, Yurko; Romero-Martínez, Ascención; Orea, Pedro
2007-06-14
The liquid-vapor phase diagram and surface tension for hard-core Yukawa potential with 4
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
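The combination free-replacement/pro-rata policy analysed above has a simple per-failure cost rule, which Monte Carlo sampling of Weibull times to failure turns into an expected-cost estimate. All numbers below (price, Weibull parameters, policy limits) are invented for illustration; they are not the paper's fitted values.

```python
import random

def warranty_cost(t_fail, price, w_free, w_total):
    """Combination policy: free replacement before w_free, linear pro-rata
    rebate between w_free and w_total, no liability afterwards."""
    if t_fail < w_free:
        return price
    if t_fail < w_total:
        return price * (w_total - t_fail) / (w_total - w_free)
    return 0.0

def expected_warranty_cost(shape, scale, price, w_free, w_total, n_runs, seed=0):
    """Monte Carlo estimate of the expected warranty cost per unit sold, with
    times to failure drawn from a Weibull(shape, scale) distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t_fail = rng.weibullvariate(scale, shape)  # alpha = scale, beta = shape
        total += warranty_cost(t_fail, price, w_free, w_total)
    return total / n_runs

# Hypothetical bulb: Weibull shape 2, scale 1500 h; price 5; FRW to 500 h, PRW to 1000 h
cost = expected_warranty_cost(2.0, 1500.0, 5.0, 500.0, 1000.0, n_runs=20000)
```

Re-running the estimate with reliability parameters fitted for each operating condition, as the paper does, lets the manufacturer compare policies before committing to one.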
Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R
2008-08-21
In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. AAPM Report No. 105 addresses issues concerning clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning, such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a variety of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100-250 MeV interval.
Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch
2016-11-14
Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
da Silva, Roberto; Alves, Nelson; Drugowich de Felício, Jose Roberto
2013-01-01
In this work, we study the critical behavior of second-order points, specifically the Lifshitz point (LP) of a three-dimensional Ising model with axial competing interactions [the axial-next-nearest-neighbor Ising (ANNNI) model], using time-dependent Monte Carlo simulations. We use a recently developed technique that helps us localize the critical temperature corresponding to the best power law for magnetization decay over time.
Patrone, Paul N; Einstein, T L; Margetis, Dionisios
2010-12-01
We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
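The coupled-SDE setup described above can be sketched with a minimal Euler-Maruyama step. The linearized drift and the pairwise "conservative" noise below are illustrative stand-ins for the BCF-derived terms, not the paper's actual coefficients; the key point shown is that equal-and-opposite noise kicks on adjacent terraces preserve the total width.

```python
import math
import random

# Euler-Maruyama integration of a toy system of coupled SDEs for terrace
# widths w_i, with conservative white noise added pairwise so the total
# width is preserved. All coefficients are illustrative.

def step(widths, dt=1e-3, g=1.0, noise=0.1, rng=random.Random(0)):
    n = len(widths)
    new = list(widths)
    for i in range(n):
        # linearized repulsion from neighboring terraces (periodic indices)
        drift = g * (widths[(i + 1) % n] - 2 * widths[i] + widths[i - 1])
        new[i] += drift * dt
    for i in range(n):
        # conservative noise: equal and opposite kicks on adjacent terraces
        xi = noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        new[i] += xi
        new[(i + 1) % n] -= xi
    return new

w = [1.0, 1.2, 0.8, 1.0]
for _ in range(1000):
    w = step(w)
print(round(sum(w), 6))  # total width is conserved: 4.0
```

With this structure the terrace-width distribution can be histogrammed over many noise realizations, which is the quantity the MF analysis predicts.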
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.
2015-11-01
Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol-1, respectively.
Geometrical Monte Carlo simulation of atmospheric turbulence
NASA Astrophysics Data System (ADS)
Yuksel, Demet; Yuksel, Heba
2013-09-01
Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is tedious and difficult: interference from many environmental factors affects the results and gives the experimental outcomes larger error margins than expected. In the stronger turbulence regimes especially, the simulation and analysis of turbulence-induced beams require careful treatment. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuity calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.
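The bubble picture above can be sketched in a few lines: a ray traverses a sequence of spherical "eddies" whose refractive index is drawn from a Gaussian centered on that of air, and the accumulated optical phase is summed along the path. The wavelength, bubble size, index spread, and function name are all illustrative assumptions, not the paper's parameters.

```python
import math
import random

# One ray through one statistical realization of the bubble atmosphere:
# sum the phase contributions k0 * n * L of each bubble traversed.

def ray_phase(n_bubbles=200, wavelength=1.55e-6, mean_n=1.000293,
              sigma_n=1e-6, bubble_size=0.05, rng=None):
    rng = rng or random.Random(1)
    k0 = 2 * math.pi / wavelength          # vacuum wavenumber
    phase = 0.0
    for _ in range(n_bubbles):
        n = rng.gauss(mean_n, sigma_n)     # refractive-index discontinuity
        phase += k0 * n * bubble_size      # optical path through one bubble
    return phase % (2 * math.pi)

# Two rays through different bubble realizations arrive with different phases;
# summing many such arrivals at a receiver point gives the interference pattern.
p1 = ray_phase(rng=random.Random(1))
p2 = ray_phase(rng=random.Random(2))
print(p1 != p2)
```

Repeating this over many rays and receiver points yields the dark/bright spot statistics without solving the wave equation.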
Shimada, M; Yamada, Y; Itoh, M; Yatagai, T
2001-09-01
Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.
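The regression idea described above can be sketched as an ordinary least-squares fit of a two-chromophore modified Beer-Lambert model, where measured absorbance at several wavelengths is a linear mix of melanin and blood contributions. The extinction spectra and concentrations below are invented toy numbers, not skin optical properties.

```python
# Least-squares recovery of two chromophore concentrations from a linear
# absorbance model, solved by hand via the 2x2 normal equations.

def least_squares_2(eps_mel, eps_blood, absorbance):
    """Solve the 2-parameter normal equations (no external libraries)."""
    a11 = sum(e * e for e in eps_mel)
    a12 = sum(em * eb for em, eb in zip(eps_mel, eps_blood))
    a22 = sum(e * e for e in eps_blood)
    b1 = sum(e * a for e, a in zip(eps_mel, absorbance))
    b2 = sum(e * a for e, a in zip(eps_blood, absorbance))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

eps_mel   = [1.0, 0.8, 0.5, 0.3]          # toy extinction spectra
eps_blood = [0.2, 0.9, 1.1, 0.4]
true_c = (2.0, 3.0)                       # "true" concentrations
absorb = [em * true_c[0] + eb * true_c[1] for em, eb in zip(eps_mel, eps_blood)]
c_mel, c_blood = least_squares_2(eps_mel, eps_blood, absorb)
print(round(c_mel, 6), round(c_blood, 6))  # recovers 2.0 and 3.0
```

The appeal of this route, as the abstract notes, is that the fit is nearly instantaneous, with Monte Carlo simulation reserved for validating its accuracy.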
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
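A structural reliability simulation of the kind described can be sketched as load-versus-strength sampling: draw both from probability distributions and count the fraction of samples in which load exceeds strength. The normal distributions and their parameters below are invented for illustration.

```python
import random

# Monte Carlo estimate of failure probability for a member whose load and
# strength are both normally distributed (toy units and parameters).

def failure_probability(n=100_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        load = rng.gauss(50.0, 5.0)        # applied load
        strength = rng.gauss(70.0, 5.0)    # member strength
        if load > strength:
            failures += 1
    return failures / n

p = failure_probability()
# The exact answer here is Phi(-20 / sqrt(50)) ~ 0.0023; the estimate lands nearby.
print(0.001 < p < 0.005)
```

As the abstract notes, the estimate's confidence interval tightens with additional samples, at the usual 1/sqrt(n) Monte Carlo rate.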
NASA Astrophysics Data System (ADS)
Xie, Huamu; Ben-Zvi, Ilan; Rao, Triveni; Xin, Tianmu; Wang, Erdong
2016-10-01
High-average-current, high-brightness electron sources have important applications, such as in high-repetition-rate free-electron lasers, or in the electron cooling of hadrons. Bialkali photocathodes are promising high-quantum-efficiency (QE) cathode materials, while superconducting rf (SRF) electron guns offer continuous-mode operation at high acceleration, as is needed for high-brightness electron sources. Thus, we must have a comprehensive understanding of the performance of bialkali photocathode at cryogenic temperatures when they are to be used in SRF guns. To remove the heat produced by the radio-frequency field in these guns, the cathode should be cooled to cryogenic temperatures. We recorded an 80% reduction of the QE upon cooling the K2CsSb cathode from room temperature down to the temperature of liquid nitrogen in Brookhaven National Laboratory (BNL)'s 704 MHz SRF gun. We conducted several experiments to identify the underlying mechanism in this reduction. The change in the spectral response of the bialkali photocathode, when cooled from room temperature (300 K) to 166 K, suggests that a change in the ionization energy (defined as the energy gap from the top of the valence band to vacuum level) is the main reason for this reduction. We developed an analytical model of the process, based on Spicer's three-step model. The change in ionization energy, with falling temperature, gives a simplified description of the QE's temperature dependence. We also developed a 2D Monte Carlo code to simulate photoemission that accounts for the wavelength-dependent photon absorption in the first step, the scattering and diffusion in the second step, and the momentum conservation in the emission step. From this simulation, we established a correlation between ionization energy and reduction in the QE. The simulation yielded results comparable to those from the analytical model. The simulation offers us additional capabilities such as calculation of the intrinsic emittance
Monte Carlo simulation of intercalated carbon nanotubes.
Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter
2007-01-01
Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. For example, in the case of a metallic CNT (9,0) with a length of 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.
Monte Carlo simulation of coarsening in a model of submonolayer epitaxial growth
NASA Astrophysics Data System (ADS)
Lam, Pui-Man; Bayayoko, Diola; Hu, Xiao-Yang
1999-06-01
We investigate the effect of coarsening in the Clarke-Vvedensky model of thin film growth, primarily as a model of statistical physics far from equilibrium. We deposit adatoms on the substrate until a fixed coverage is reached. We then stop the deposition and measure the subsequent change in the distribution of the island sizes. We find that for large flux, coarsening in this model is consistent with the Lifshitz-Slyozov law ξ ∼ t^(1/3), where ξ is the characteristic linear dimension and t is the time in the coarsening process. We have also calculated the stationary states of the island size distributions at long times and find that these distribution functions are independent of initial conditions. They obey scaling with the universal scaling function agreeing with that obtained by Kandel using the Smoluchowski equation in a cluster coalescence model.
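Measuring a coarsening exponent like the one above amounts to a power-law fit: a straight-line regression of log(ξ) against log(t). The data below are synthetic, generated exactly with the t^(1/3) law purely to illustrate the fitting step, not results from the model.

```python
import math

# Recover a Lifshitz-Slyozov-type exponent by least-squares fitting
# log(xi) vs log(t); the slope of the line is the coarsening exponent.

def fit_exponent(times, lengths):
    xs = [math.log(t) for t in times]
    ys = [math.log(x) for x in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

times = [10, 100, 1000, 10000]
xi = [2.0 * t ** (1 / 3) for t in times]   # perfect t^(1/3) coarsening
print(round(fit_exponent(times, xi), 3))   # -> 0.333
```

On real simulation data the fit would be restricted to the late-time regime, where the asymptotic law holds.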
Monte Carlo Simulation of Endlinking Oligomers
NASA Technical Reports Server (NTRS)
Hinkley, Jeffrey A.; Young, Jennifer A.
1998-01-01
This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.
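The end-linking step described above can be sketched as a pairwise distance check on a periodic lattice: reactive chain ends are joined whenever their separation is within the allowed bond distance. The coordinates and the squared-distance cutoff are illustrative assumptions, not the report's actual bond-fluctuation bond set.

```python
# Minimal sketch of the end-linking check on a 20 x 20 x 20 periodic lattice.

L = 20  # periodic box edge, as in the lattice used in the report

def min_image(d):
    return d - L * round(d / L)            # periodic minimum-image convention

def link_ends(ends, max_bond_sq=9):        # illustrative cutoff |b|^2 <= 9
    links = []
    for i in range(len(ends)):
        for j in range(i + 1, len(ends)):
            d2 = sum(min_image(a - b) ** 2 for a, b in zip(ends[i], ends[j]))
            if d2 <= max_bond_sq:
                links.append((i, j))
    return links

ends = [(0, 0, 0), (2, 1, 0), (10, 10, 10), (19, 0, 0)]
# Ends 0 and 1 are close directly; ends 0 and 3 are close across the
# periodic boundary; the isolated end 2 links to nothing.
print(link_ends(ends))
```

Keeping ends reactive after linking, as in the report, simply means re-running this check every sweep rather than removing linked ends from the list.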
NASA Astrophysics Data System (ADS)
Lyubartsev, Alexander; Ben-Naim, Arieh
2009-11-01
We have carried out Monte Carlo simulation on the primitive one dimensional model for water described earlier [A. Ben-Naim, J. Chem. Phys. 128, 024506 (2008)]. We show that by taking into account second nearest neighbor interactions, one can obtain the characteristic anomalous solvation thermodynamic quantities of inert solutes in water. This model clearly demonstrates the molecular origin of the large negative entropy of solvation of an inert solute in water.
Runov, A.M.; Kasilov, S.V.; Helander, P.
2015-11-01
A kinetic Monte Carlo model suited for self-consistent transport studies is proposed and tested. The Monte Carlo collision operator is based on a widely used model of Coulomb scattering by a drifting Maxwellian and a new algorithm enforcing the momentum and energy conservation laws. The difference to other approaches consists in a specific procedure of calculating the background Maxwellian parameters, which does not require ensemble averaging and, therefore, allows for the use of single-particle algorithms. This possibility is useful in transport balance (steady state) problems with a phenomenological diffusive ansatz for the turbulent transport, because it allows a direct use of variance reduction methods well suited for single particle algorithms. In addition, a method for the self-consistent calculation of the electric field is discussed. Results of testing of the new collision operator using a set of 1D examples, and preliminary results of 2D modelling in realistic tokamak geometry, are presented.
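The conservation-enforcement idea can be sketched generically: after random collisional kicks, shift and rescale the ensemble's velocities so that total momentum and kinetic energy match their pre-collision values. This is a 1D toy correction step of my own construction, not the specific operator or background-Maxwellian procedure of the paper.

```python
import math
import random

# After un-conserving random kicks, restore the ensemble's drift (momentum)
# and mean kinetic energy exactly by a shift plus a rescaling.

def enforce_conservation(v_old, v_new):
    n = len(v_old)
    p_old = sum(v_old) / n                           # pre-collision drift
    e_old = sum(v * v for v in v_old) / n            # pre-collision mean v^2
    p_new = sum(v_new) / n
    centered = [v - p_new for v in v_new]
    e_c = sum(v * v for v in centered) / n
    scale = math.sqrt((e_old - p_old ** 2) / e_c)    # match thermal spread
    return [p_old + scale * v for v in centered]     # then restore the drift

rng = random.Random(3)
v = [rng.gauss(1.0, 2.0) for _ in range(1000)]
kicked = [u + rng.gauss(0.0, 0.5) for u in v]        # un-conserving kicks
fixed = enforce_conservation(v, kicked)
print(abs(sum(fixed) - sum(v)) < 1e-6,
      abs(sum(x * x for x in fixed) - sum(x * x for x in v)) < 1e-6)
```

A correction of this ensemble-wide form requires averaging over particles; the paper's contribution is precisely a procedure that avoids such ensemble averaging so single-particle algorithms remain usable.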
Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses
Alam, Todd M.
1999-12-21
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedra for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
NASA Astrophysics Data System (ADS)
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
The first few days after the occurrence of a strong earthquake, in the presence of an ongoing aftershock sequence, are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages, such as tuning the model to the specific sequence characteristics and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Monte Carlo simulation of chromatin stretching
NASA Astrophysics Data System (ADS)
Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg
2006-04-01
We present Monte Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending.
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
Mode, Charles J; Gallop, Robert J
2008-02-01
A case is made for the use of Monte Carlo simulation methods when the incorporation of mutation and natural selection into Wright-Fisher gametic sampling models renders them intractable from the standpoint of classical mathematical analysis. The paper has been organized around five themes. Among these themes was that of scientific openness and a clear documentation of the mathematics underlying the software, so that the results of any Monte Carlo simulation experiment may be duplicated by any interested investigator in a programming language of his choice. A second theme was the disclosure of the random number generator used in the experiments, to provide critical insights as to whether the generated uniform random variables met the criterion of independence satisfactorily. A third theme was a review of recent literature in genetics on attempts to find signatures of evolutionary processes, such as natural selection, among the millions of segments of DNA in the human genome, that may help guide the search for new drugs to treat diseases. A fourth theme involved formalization of Wright-Fisher processes in a simple form that expedited the writing of software to run Monte Carlo simulation experiments. Also included in this theme was the reporting of several illustrative Monte Carlo simulation experiments for the cases of two and three alleles at some autosomal locus, in which attempts were made to apply the theory of Wright-Fisher models to gain some understanding as to how evolutionary signatures may have developed in the human genome and those of other diploid species. A fifth theme was centered on recommendations that more demographic factors, such as non-constant population size, be included in future attempts to develop computer models dealing with signatures of evolutionary process in genomes of various species. A brief review of literature on the incorporation of demographic factors into genetic evolutionary models was also included to expedite and
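A minimal haploid two-allele Wright-Fisher generation with selection and symmetric mutation, in the spirit of the gametic sampling models discussed above, can be sketched as follows. The fitness advantage, mutation rate, population size, and haploid simplification are all invented for illustration and are not the paper's model.

```python
import random

# One Wright-Fisher generation: deterministic selection and mutation adjust
# the allele frequency, then finite-population sampling adds genetic drift.

def next_generation(p, n, s=0.1, mu=1e-3, rng=random.Random(42)):
    # selection: allele A has relative fitness 1 + s
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    # symmetric mutation between the two alleles
    p_mut = p_sel * (1 - mu) + (1 - p_sel) * mu
    # binomial gamete sampling in a population of n
    # (note: the default rng is shared across calls, giving one reproducible stream)
    k = sum(1 for _ in range(n) if rng.random() < p_mut)
    return k / n

p = 0.2
trajectory = [p]
for _ in range(200):
    p = next_generation(p, n=500)
    trajectory.append(p)
print(p > trajectory[0])  # positive selection drives allele A upward
```

Repeating such runs many times, with the generator seed disclosed as the paper urges, gives reproducible frequency-trajectory ensembles from which selection signatures can be studied.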
Monte Carlo simulations of Protein Adsorption
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges
2008-03-01
Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.
Lattice Monte Carlo simulations of polymer melts
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping
2014-12-01
We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations between Gaussian statistics for the chain structure factor Sc(q) [minimum in the Kratky-plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain length these deviations are no longer visible, when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.
Borges, C.; Zarza-Moreno, M.; Heath, E.; Teixeira, N.; Vaz, P.
2012-01-15
Purpose: The most recent Varian micro multileaf collimator (MLC), the High Definition (HD120) MLC, was modeled using the BEAMNRC Monte Carlo code. This model was incorporated into a Varian medical linear accelerator, for a 6 MV beam, in static and dynamic mode. The model was validated by comparing simulated profiles with measurements. Methods: The Varian Trilogy (2300C/D) accelerator model was accurately implemented using the state-of-the-art Monte Carlo simulation program BEAMNRC and validated against off-axis and depth dose profiles measured using ionization chambers, by adjusting the energy and the full width at half maximum (FWHM) of the initial electron beam. The HD120 MLC was modeled by developing a new BEAMNRC component module (CM), designated HDMLC, adapting the available DYNVMLC CM and incorporating the specific characteristics of this new micro MLC. The leaf dimensions were provided by the manufacturer. The geometry was visualized by tracing particles through the CM and recording their position when a leaf boundary is crossed. The leaf material density and abutting air gap between leaves were adjusted in order to obtain a good agreement between the simulated leakage profiles and EBT2 film measurements performed in a solid water phantom. To validate the HDMLC implementation, additional MLC static patterns were also simulated and compared to additional measurements. Furthermore, the ability to simulate dynamic MLC fields was implemented in the HDMLC CM. The simulation results of these fields were compared with EBT2 film measurements performed in a solid water phantom. Results: Overall, the discrepancies, with and without MLC, between the opened field simulations and the measurements using ionization chambers in a water phantom, for the off-axis profiles are below 2% and in depth-dose profiles are below 2% after the maximum dose depth and below 4% in the build-up region. On the conditions of these simulations, this tungsten-based MLC has a density of 18.7 g
NASA Astrophysics Data System (ADS)
Terzyk, Artur P.; Furmaniak, Sylwester; Gauden, Piotr A.; Harris, Peter J. F.; Włoch, Jerzy
2008-09-01
Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods for describing adsorption data on microporous carbons widely used in adsorption science. Two carbon structures are studied, one with a small distribution of micropores in the range up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that using simulated data some empirical relationships obtained from experiment can be successfully recovered. Next we test the applicability of Dubinin's related models including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results obtained demonstrate the limits and applications of the models studied in the field of carbon porosity characterization.
Bernal, M A; Bordage, M C; Brown, J M C; Davídková, M; Delage, E; El Bitar, Z; Enger, S A; Francis, Z; Guatelli, S; Ivanchenko, V N; Karamitros, M; Kyriakou, I; Maigne, L; Meylan, S; Murakami, K; Okada, S; Payno, H; Perrot, Y; Petrovic, I; Pham, Q T; Ristic-Fira, A; Sasaki, T; Štěpán, V; Tran, H N; Villagrasa, C; Incerti, S
2015-12-01
Understanding the fundamental mechanisms involved in the induction of biological damage by ionizing radiation remains a major challenge of today's radiobiology research. The Monte Carlo simulation of physical, physicochemical and chemical processes involved may provide a powerful tool for the simulation of early damage induction. The Geant4-DNA extension of the general purpose Monte Carlo Geant4 simulation toolkit aims to provide the scientific community with an open source access platform for the mechanistic simulation of such early damage. This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta). In particular, the review includes the description of new physical models for the description of electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of physicochemical and chemical stages of water radiolysis. Several implementations of geometrical models of biological targets are presented as well, and the list of Geant4-DNA examples is described.
NASA Astrophysics Data System (ADS)
Zaim, Ahmed; Kerouad, Mohamed
2010-09-01
A Monte Carlo simulation has been used to study the magnetic properties and the critical behaviors of a single spherical nanoparticle, consisting of a ferromagnetic core of σ = ±1/2 spins surrounded by a ferromagnetic shell of S = ±1, 0 or S = ±1/2, ±3/2 spins with antiferromagnetic interface coupling, located on a simple cubic lattice. A number of characteristic phenomena have been found. In particular, the effects of the shell coupling and the interface coupling on both the critical and compensation temperatures are investigated. We have found that, for appropriate values of the system parameters, two compensation temperatures may occur in the present system.
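The single-spin-flip Metropolis rule underlying simulations of this kind can be sketched in a few lines. The example below is a deliberately simplified illustration (a plain 2D Ising ferromagnet with one spin species, J = 1, and arbitrary lattice size and temperature), not the authors' core-shell code:

```python
import math
import random

def metropolis_ising(L=16, T=0.5, sweeps=200, seed=1):
    """Single-spin-flip Metropolis for a 2D Ising ferromagnet (J = 1).

    Returns the absolute magnetization per spin after `sweeps` lattice
    sweeps. The core-shell model adds a second spin species and an
    antiferromagnetic interface coupling, but the update rule is the same.
    """
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # ordered (all-up) start
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]       # accept the flip
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)
```

Well below the critical temperature the ordered phase survives, so the magnetization stays near 1; near and above Tc it collapses toward 0.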
Search and Rescue Monte Carlo Simulation.
1985-03-01
confidence interval) of the number of lives saved. A single page output and computer graphic present the information to the user in an easily understood...format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)
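The quoted behavior, that additional runs tighten the confidence interval, follows from the usual 1/√n decay of the Monte Carlo standard error. A toy sketch (the per-run outcome distribution below is hypothetical, not the SAR model's):

```python
import math
import random

def mc_mean_with_ci(n_runs, seed=0):
    """Estimate a mean with a ~95% confidence interval from repeated
    Monte Carlo runs. Each run yields a simulated count of 'lives saved'
    (a hypothetical stand-in distribution, not the SAR model)."""
    rng = random.Random(seed)
    outcomes = [sum(rng.random() < 0.3 for _ in range(50))
                for _ in range(n_runs)]
    mean = sum(outcomes) / n_runs
    var = sum((x - mean) ** 2 for x in outcomes) / (n_runs - 1)
    half_width = 1.96 * math.sqrt(var / n_runs)   # CI half-width ~ 1/sqrt(n)
    return mean, half_width
```

Raising the number of runs from 100 to 10,000 shrinks the half-width by roughly a factor of ten.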
NASA Astrophysics Data System (ADS)
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-01
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, a CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using their respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that is a conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined the representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio, and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities excluding those of C and O for the light spongiosa tissues between 1.0 g cm-3 and 1.1 g cm-3 occupying 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
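A "polyline" relation of the kind formulated above is piecewise-linear interpolation between fitted nodes, clamped at the ends. A minimal sketch, with hypothetical node values rather than the paper's fitted coefficients:

```python
def polyline(x, xs, ys):
    """Piecewise-linear ('polyline') interpolation with end clamping,
    of the kind used to map mass density to electron density,
    stopping-power ratio, or elemental densities. Node values are
    supplied by the caller; xs must be sorted ascending."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

# Hypothetical nodes: mass density (g/cm^3) -> stopping-power ratio.
# These are illustrative numbers, not the paper's fitted polylines.
nodes_x = [0.0012, 0.95, 1.05, 1.92]
nodes_y = [0.0011, 0.95, 1.03, 1.77]
```

With these nodes, a density of 1.0 g/cm^3 maps to the midpoint of the second segment, 0.99.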
Bostani, Maryam McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain
NASA Astrophysics Data System (ADS)
Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.
2014-04-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nat. Commun. 3, 1124 (2012), 10.1038/ncomms2117]. These modes are well described by three-dimensional isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states, and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximate Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Monte Carlo simulation for the transport beamline
NASA Astrophysics Data System (ADS)
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.
Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald
2016-12-01
Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.
NASA Astrophysics Data System (ADS)
Cho, Gyu-Seok; Kim, Kum-Bae; Choi, Sang-Hyoun; Song, Yong-Keun; Lee, Soon-Sung
2017-01-01
Recently, Monte Carlo methods have been used to optimize the design and modeling of radiation detectors. However, most Monte Carlo codes have a fixed and simple optical physics, and the effect of the signal readout devices is not considered because of the limitations of the geometry function. Therefore, the disadvantages of the codes prevent the modeling of the scintillator detector. The modeling of a comprehensive and extensive detector system has been reported to be feasible when the optical physics model of the GEometry ANd Tracking 4 (GEANT4) simulation code is used. In this study, we performed Gd2O3:Eu scintillator modelling by using the GEANT4 simulation code and compared the results with the measurement data. To obtain the measurement data for the scintillator, we synthesized the Gd2O3:Eu scintillator by using the solution combustion method, and we evaluated the characteristics of the scintillator by using X-ray diffraction and photoluminescence. We imported the measured data into the GEANT4 code because GEANT4 cannot simulate a fluorescence phenomenon. The imported data were used as an energy distribution for optical photon generation based on the energy deposited in the scintillator. As a result of the simulation, a strong emission peak consistent with the measured data was observed at 611 nm, and the overall trends of the spectrum agreed with the measured data. This result is significant because the characteristics of the scintillator are equally implemented in the simulation, indicating a valuable improvement in the modeling of scintillator-based radiation detectors.
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
NASA Astrophysics Data System (ADS)
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers: immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis, as well as laterally asymmetric features (e.g., melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Tam, Ka-Ming; Thakur, Bhupender; Yun, Zhifeng; Tomko, Karen; Moreno, Juana; Ramanujam, Jagannathan; Jarrell, Mark
2013-03-01
The Edwards-Anderson model is a typical example of a random frustrated system. It has been a long-standing problem in computational physics due to its long relaxation time. Some important properties of the low temperature spin glass phase are still poorly understood after decades of study. The recent advances of GPU computing provide a new opportunity to substantially improve the simulations. We developed an MPI-CUDA hybrid code with multi-spin coding for parallel tempering Monte Carlo simulation of the Edwards-Anderson model. Since the system size is relatively small, and a large number of parallel replicas and Monte Carlo moves are required, the problem is well suited to modern GPUs with the CUDA architecture. We use the code to perform an extensive simulation on the three-dimensional Edwards-Anderson model with an external field. This work is funded by the NSF EPSCoR LA-SiGMA project under award number EPS-1003897. This work is partly done on the machines of the Ohio Supercomputer Center.
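The replica-exchange (parallel tempering) step at the heart of such simulations swaps configurations between neighboring temperatures with probability min(1, exp((βi − βj)(Ei − Ej))). A minimal sketch of that acceptance test (an illustrative scalar version, not the MPI-CUDA multi-spin-coded implementation described above):

```python
import math
import random

def attempt_swap(beta_i, beta_j, E_i, E_j, rng):
    """Parallel tempering (replica exchange) acceptance test.

    Configurations at inverse temperatures beta_i and beta_j, with
    energies E_i and E_j, are swapped with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j))).
    Returns True if the swap is accepted."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng.random() < math.exp(delta)
```

Swaps that carry lower-energy configurations to colder replicas are always accepted; the reverse move is accepted only with Boltzmann-suppressed probability, which preserves detailed balance across the temperature ladder.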
Monte-Carlo simulation of Callisto's exosphere
NASA Astrophysics Data System (ADS)
Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.
2015-12-01
We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.
Chang, Qiang; Herbst, Eric
2014-06-01
We have designed an improved algorithm that enables us to simulate the chemistry of cold dense interstellar clouds with a full gas-grain reaction network. The chemistry is treated by a unified microscopic-macroscopic Monte Carlo approach that includes photon penetration and bulk diffusion. To determine the significance of these two processes, we simulate the chemistry with three different models. In Model 1, we use an exponential treatment to follow how photons penetrate and photodissociate ice species throughout the grain mantle. Moreover, the products of photodissociation are allowed to diffuse via bulk diffusion and react within the ice mantle. Model 2 is similar to Model 1 but with a slower bulk diffusion rate. A reference Model 0, which only allows photodissociation reactions to occur on the top two layers, is also simulated. Photodesorption is assumed to occur from the top two layers in all three models. We found that the abundances of major stable species in grain mantles do not differ much among these three models, and the results of our simulation for the abundances of these species agree well with observations. Likewise, the abundances of gas-phase species in the three models do not vary. However, the abundances of radicals in grain mantles can differ by up to two orders of magnitude depending upon the degree of photon penetration and the bulk diffusion of photodissociation products. We also found that complex molecules can be formed at temperatures as low as 10 K in all three models.
Monte Carlo simulation of laser attenuation characteristics in fog
NASA Astrophysics Data System (ADS)
Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi
2011-06-01
Based on the Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of a spherical fog droplet is calculated. For the transmission attenuation of the laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in the MATLAB language. The results of the Monte Carlo method in this paper are compared with the results of the single-scattering method. The results show that the influence of multiple scattering needs to be considered when the visibility is low, since single-scattering calculations then have larger errors. The phenomenon of multiple scattering is better captured when the Monte Carlo method is used to calculate the attenuation ratio of the laser transmitting in fog.
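The gap between single and multiple scattering noted above can be illustrated with a minimal photon random walk through a slab: at large optical depth (low visibility), multiply scattered photons raise the transmission well above the single-pass Beer-Lambert value exp(−τ). The sketch below uses isotropic scattering and illustrative parameters, not the paper's Mie phase function or MATLAB program:

```python
import math
import random

def slab_transmission(tau=3.0, albedo=1.0, n_photons=20000, seed=42):
    """Monte Carlo transmission through a scattering slab of optical
    depth tau. Photons enter at normal incidence; free paths are
    exponential, scattering is isotropic, and a photon survives each
    interaction with probability `albedo`. Single-pass (Beer-Lambert)
    transmission would be exp(-tau); multiple scattering lets more
    photons through."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                    # depth in optical-depth units
        while True:
            z += mu * (-math.log(1.0 - rng.random()))  # exponential step
            if z >= tau:
                transmitted += 1            # exited the far side
                break
            if z < 0.0:
                break                       # escaped backward
            if rng.random() > albedo:
                break                       # absorbed
            mu = 2.0 * rng.random() - 1.0   # isotropic re-scatter
    return transmitted / n_photons
```

For a purely scattering slab with τ = 3, the Monte Carlo transmission is several times the Beer-Lambert value of about 0.05, which is the regime where single-scattering calculations err.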
Maier, Thomas A; Alvarez, Gonzalo; Summers, Michael Stuart; Schulthess, Thomas C
2010-01-01
Using dynamic cluster quantum Monte Carlo simulations, we study the superconducting behavior of a 1/8-doped two-dimensional Hubbard model with imposed unidirectional stripelike charge-density-wave modulation. We find a significant increase of the pairing correlations and critical temperature relative to the homogeneous system when the modulation length scale is sufficiently large. With a separable form of the irreducible particle-particle vertex, we show that optimized superconductivity is obtained for a moderate modulation strength due to a delicate balance between the modulation-enhanced pairing interaction and a concomitant suppression of the bare particle-particle excitations by a modulation reduction of the quasiparticle weight.
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of the effect of variations in the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was found to be less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as is to be expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast relative to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall
Coherent Scattering Imaging Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. Further study is needed to assess the effect of breast density and breast thickness
Monte Carlo simulations of medical imaging modalities
Estes, G.P.
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Zhang, Minhua; Chen, Lihang; Yang, Huaming; Sha, Xijiang; Ma, Jing
2016-07-01
Gibbs ensemble Monte Carlo simulation with configurational bias was employed to study the vapor-liquid equilibrium (VLE) for pure acetic acid and for a mixture of acetic acid and ethylene. An improved united-atom force field for acetic acid based on a Lennard-Jones functional form was proposed. The Lennard-Jones well depth and size parameters for the carboxyl oxygen and hydroxyl oxygen were determined by fitting the interaction energies of acetic acid dimers to the Lennard-Jones potential function. Four different acetic acid dimers and the proportions of them were considered when the force field was optimized. It was found that the new optimized force field provides a reasonable description of the vapor-liquid phase equilibrium for pure acetic acid and for the mixture of acetic acid and ethylene. Accurate values were obtained for the saturated liquid density of the pure compound (average deviation: 0.84 %) and for the critical points. The new optimized force field demonstrated greater accuracy and reliability in calculations of the solubility of the mixture of acetic acid and ethylene as compared with the results obtained with the original TraPPE-UA force field.
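Fitting the Lennard-Jones well depth and size parameters to dimer interaction energies, as described above, amounts to least-squares minimization over (ε, σ). A crude grid-search sketch (the 12-6 functional form is standard; the sample data and grids below are hypothetical, not the paper's dimer energies):

```python
def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair energy: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def fit_lj(samples, eps_grid, sigma_grid):
    """Grid-search least-squares fit of (eps, sigma) to a list of
    (distance, interaction energy) samples, standing in for the
    dimer-energy fitting described above. Returns the best (eps, sigma)."""
    best = None
    for eps in eps_grid:
        for sigma in sigma_grid:
            sse = sum((lj(r, eps, sigma) - e) ** 2 for r, e in samples)
            if best is None or sse < best[0]:
                best = (sse, eps, sigma)
    return best[1], best[2]
```

In practice a continuous optimizer would replace the grid, and the objective would weight the four dimer geometries by their proportions, as the abstract notes.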
POD generated by Monte Carlo simulation using a meta-model based on the simSUNDT software
NASA Astrophysics Data System (ADS)
Persson, G.; Hammersberg, P.; Wirdelius, H.
2012-05-01
A recently developed numerical procedure for simulation of POD is used to identify the most influential parameters and to test the effect of their interaction and variability with different statistical distributions. With a multi-parameter prediction model based on the NDT simulation software simSUNDT, a qualified ultrasonic inspection procedure used by personnel at Swedish nuclear power plants is investigated. The stochastic computations are compared to experimentally based POD, and conclusions are drawn for both fatigue and stress corrosion cracks.
NASA Astrophysics Data System (ADS)
WöHling, Thomas; Vrugt, Jasper A.
2011-04-01
In the past two decades significant progress has been made toward the application of inverse modeling to estimate the water retention and hydraulic conductivity functions of the vadose zone at different spatial scales. Many of these contributions have focused on estimating only a few soil hydraulic parameters, without recourse to appropriately capturing and addressing spatial variability. The assumption of a homogeneous medium significantly simplifies the complexity of the resulting inverse problem, allowing the use of classical parameter estimation algorithms. Here we present an inverse modeling study with a high degree of vertical complexity that involves calibration of a 25 parameter Richards'-based HYDRUS-1D model using in situ measurements of volumetric water content and pressure head from multiple depths in a heterogeneous vadose zone in New Zealand. We first determine the trade-off in the fitting of both data types using the AMALGAM multiple objective evolutionary search algorithm. Then we adopt a Bayesian framework and derive posterior probability density functions of parameter and model predictive uncertainty using the recently developed differential evolution adaptive metropolis, DREAMZS adaptive Markov chain Monte Carlo scheme. We use four different formulations of the likelihood function each differing in their underlying assumption about the statistical properties of the error residual and data used for calibration. We show that AMALGAM and DREAMZS can solve for the 25 hydraulic parameters describing the water retention and hydraulic conductivity functions of the multilayer heterogeneous vadose zone. Our study clearly highlights that multiple data types are simultaneously required in the likelihood function to result in an accurate soil hydraulic characterization of the vadose zone of interest. Remaining error residuals are most likely caused by model deficiencies that are not encapsulated by the multilayer model and can not be accessed by the
Monte Carlo simulations on SIMD computer architectures
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
ERIC Educational Resources Information Center
Hannan, Peter J.; Murray, David M.
1996-01-01
A Monte Carlo study compared performance of linear and logistic mixed-model analyses of simulated community trials having specific event rates, intraclass correlations, and degrees of freedom. Results indicate that in studies with adequate denominator degrees of freedom, the researcher may use either method of analysis, with certain cautions. (SLD)
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. This parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth with either a distributed memory parallel computer or a shared memory machine with message passing libraries. In this paper, the significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The overhead of communication does not increase noticeably, and the speedup shows an ascending tendency as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.
A tetrahedron-based inhomogeneous Monte Carlo optical simulator.
Shen, H; Wang, G
2010-02-21
Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program, 'MCML', was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulation tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon-triangle interaction recursively and rapidly. In numerical simulations, we have demonstrated the correctness and efficiency of TIM-OS.
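The photon-triangle test at the core of such a simulator can be sketched with the standard Möller-Trumbore ray-triangle intersection. This is a generic illustration under assumed geometry, not the actual TIM-OS code.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: distance t along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:           # outside first barycentric bound
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:       # outside second barycentric bound
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None    # only hits in front of the photon

# A photon travelling along +z hits the unit triangle in the z = 1 plane.
t = ray_triangle(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                 np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                 np.array([0.0, 1.0, 1.0]))
print(t)  # 1.0
```

In a mesh-based simulator, a photon step would test the faces of its current tetrahedron and take the nearest positive `t`.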
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters grows too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
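The sampling idea can be made concrete: with d parameters each taking k levels, an exhaustive design needs k^d simulation runs, while a sampled design draws a fixed budget of N parameter vectors from the same space. A hypothetical sketch:

```python
import random

k, d = 10, 6                     # 10 levels for each of 6 parameters
grid_runs = k ** d               # exhaustive design: one run per grid point
print(grid_runs)                 # 1000000 runs, infeasible for slow simulators

random.seed(1)
N = 500                          # sampled design: budget set by the user
sampled = [[random.uniform(0.0, 1.0) for _ in range(d)]
           for _ in range(N)]    # each row is one parameter vector to simulate
print(len(sampled), len(sampled[0]))  # 500 6
```

The cost of the sampled design is fixed at N regardless of dimension, which is what makes it viable when the grid explodes.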
NASA Astrophysics Data System (ADS)
Eising, G.; Kooi, B. J.
2012-06-01
Growth and decay of clusters at temperatures below Tc have been studied for a two-dimensional Ising model on both square and triangular lattices using Monte Carlo (MC) simulations and the enumeration of lattice animals. For the lattice animals, all unique cluster configurations with their internal bonds were identified, up to 25 spins for the triangular lattice and up to 29 spins for the square lattice. From these configurations, the critical cluster sizes for nucleation have been determined based on two (thermodynamic) definitions. From the Monte Carlo simulations, the critical cluster size is also obtained by studying the decay and growth of inserted, most-compact clusters of different sizes. Good agreement is found between the results of the MC simulations and one of the definitions of critical size used for the lattice animals at temperatures T > ~0.4 Tc for the square lattice and T > ~0.2 Tc for the triangular lattice (for the range of external fields H considered). At low temperatures (T ≈ 0.2 Tc for the square lattice and T ≈ 0.1 Tc for the triangular lattice), magic numbers are found in the size distributions during the MC simulations. However, these numbers are not present in the critical cluster sizes based on the MC simulations, whereas they are present in the lattice animal data. In order to recover these magic numbers in the critical cluster sizes based on the MC simulations, the temperature has to be reduced further, to T ≈ 0.15 Tc for the square lattice. The observed evolution of magic numbers as a function of temperature is rationalized in the present work.
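A minimal Metropolis Monte Carlo for the 2D square-lattice Ising model in an external field, of the kind used to track cluster growth and decay, can be sketched as follows (lattice size, temperature, and field are illustrative values, not those of the study):

```python
import math, random

def metropolis_ising(L=16, T=1.5, H=0.05, sweeps=200, seed=2):
    rng = random.Random(seed)
    # Random initial spin configuration on an L x L periodic lattice.
    s = [[1 if rng.random() < 0.5 else -1 for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        # Energy change for flipping spin (i, j), with J = 1:
        # dE = 2 s_ij (sum of neighbours + H)
        dE = 2.0 * s[i][j] * (nb + H)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]       # Metropolis acceptance
    return s

spins = metropolis_ising()
m = sum(sum(row) for row in spins) / 16**2
print(abs(m) <= 1.0)  # True: magnetization per spin is bounded
```

Cluster studies like the one above would additionally insert a compact cluster of given size into an otherwise aligned lattice and record whether it grows or decays.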
NASA Astrophysics Data System (ADS)
Ono, Kiminori; Matsukawa, Yoshiya; Saito, Yasuhiro; Matsushita, Yohsuke; Aoki, Hideyuki; Era, Koki; Aoki, Takayuki; Yamaguchi, Togo
2015-06-01
This study presents the validity and ability of an aggregate mean free path cluster-cluster aggregation (AMP-CCA) model, a direct Monte Carlo simulation, to predict aggregate morphology for diameters from about 15 to 200 nm by comparing the particle size distributions (PSDs) with the results of a previous stochastic approach. The PSDs calculated by the AMP-CCA model, treating each calculated aggregate as a coalesced spherical particle, are in reasonable agreement with the results of the previous stochastic model regardless of the initial number concentration of particles. Shape analysis using two methods, perimeter fractal dimension and shape categories, has demonstrated that the aggregate structures become more complex with increasing initial number concentration of particles. The AMP-CCA model provides a useful tool for calculating aggregate morphology and PSDs with reasonable accuracy.
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
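The probing-depth calculation can be illustrated with a toy Monte Carlo: draw burial depths from an assumed distribution and take the depth that reaches a desired fraction of simulated burials. The lognormal parameters below are placeholders, not values from the avalanche literature.

```python
import random

random.seed(3)
# Assumed burial-depth distribution (metres); parameters are illustrative only.
depths = [random.lognormvariate(0.0, 0.5) for _ in range(100000)]

def probing_depth(depths, coverage=0.95):
    """Smallest probe depth that would reach `coverage` of simulated burials."""
    d = sorted(depths)
    return d[int(coverage * len(d)) - 1]

p95 = probing_depth(depths, 0.95)
frac = sum(1 for d in depths if d <= p95) / len(depths)
print(round(frac, 3))  # 0.95 of simulated victims lie above the probe depth
```

The same pattern, sampling unknown parameters and reading quantiles off the simulated outcomes, carries over to the resuscitation-time question.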
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
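The multilevel idea can be sketched for a scalar Langevin-type SDE: a cheap coarse-timestep estimator is corrected by the sampled difference between fine and coarse solutions driven by the same Brownian increments, so the correction has small variance. This toy uses Euler-Maruyama on an Ornstein-Uhlenbeck process, not the Landau-Fokker-Planck dynamics of the paper, and only two levels.

```python
import math, random

def em_path(x0, dt, dws):
    # Euler-Maruyama for the toy OU process dX = -X dt + 0.5 dW.
    x = x0
    for dw in dws:
        x += -x * dt + 0.5 * dw
    return x

def mlmc_two_level(x0=1.0, T=1.0, n_coarse=16, samples=20000, seed=4):
    rng = random.Random(seed)
    nf, nc = 2 * n_coarse, n_coarse
    dtf, dtc = T / nf, T / nc
    est_c, est_d = 0.0, 0.0
    for _ in range(samples):
        dwf = [rng.gauss(0.0, math.sqrt(dtf)) for _ in range(nf)]
        # Coarse increments are sums of paired fine increments: same noise.
        dwc = [dwf[2 * i] + dwf[2 * i + 1] for i in range(nc)]
        xf = em_path(x0, dtf, dwf)
        xc = em_path(x0, dtc, dwc)
        est_c += xc / samples          # coarse-level estimate of E[X_T]
        est_d += (xf - xc) / samples   # low-variance fine-level correction
    return est_c + est_d

est = mlmc_two_level()
print(round(est, 2))  # close to exp(-1) for this OU process
```

A full multilevel method would use a whole hierarchy of timesteps and allocate samples per level to minimize total cost at fixed accuracy.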
Buyukada, Musa
2017-02-01
The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on a Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R²pred) of 99.5%. A blend ratio of 90/10 (HH to coal, wt%), a temperature of 405 °C, and a heating rate of 44 °C min⁻¹ were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to justify the predicted mass-loss percentage, and a mass loss of 87.5 ± 0.2% was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints on the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities and impact angles. The rheology used in the physical model does not include friction or crushing; this allows for a direct comparison to the results of Durda et al. (2007). The resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
NASA Astrophysics Data System (ADS)
Madurga, Sergio; Rey-Castro, Carlos; Pastor, Isabel; Vilaseca, Eudald; David, Calin; Garcés, Josep Lluís; Puy, Jaume; Mas, Francesc
2011-11-01
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups.
DeMarco, J J; Cagnon, C H; Cody, D D; Stevens, D M; McCollough, C H; Zankl, M; Angel, E; McNitt-Gray, M F
2007-05-07
The purpose of this work is to examine the effects of patient size on radiation dose from CT scans. To perform these investigations, we used Monte Carlo simulation methods with detailed models of both patients and multidetector computed tomography (MDCT) scanners. A family of three-dimensional, voxelized patient models previously developed and validated by the GSF was implemented as input files using the Monte Carlo code MCNPX. These patient models represent a range of patient sizes and ages (8 weeks to 48 years) and have all radiosensitive organs previously identified and segmented, allowing the estimation of dose to any individual organ and calculation of patient effective dose. To estimate radiation dose, every voxel in each patient model was assigned both a specific organ index number and an elemental composition and mass density. Simulated CT scans of each voxelized patient model were performed using a previously developed MDCT source model that includes scanner-specific spectra, including bowtie filter, scanner geometry and helical source path. The scan simulations in this work include a whole-body scan protocol and a thoracic CT scan protocol, each performed with fixed tube current. The whole-body scan simulation yielded a predictable decrease in effective dose as a function of increasing patient weight. Results from analysis of individual organs demonstrated similar trends, but with some individual variations. A comparison with a conventional dose estimation method using the ImPACT spreadsheet yielded an effective dose of 0.14 mSv mAs⁻¹ for the whole-body scan. This result is lower than the simulations on the voxelized model designated 'Irene' (0.15 mSv mAs⁻¹) and higher than the models 'Donna' and 'Golem' (0.12 mSv mAs⁻¹). For the thoracic scan protocol, the ImPACT spreadsheet estimates an effective dose of 0.037 mSv mAs⁻¹, which falls between the calculated values for Irene (0.042 mSv mAs⁻¹) and Donna (0.031 mSv mAs⁻¹) and is higher relative
Progress on coupling UEDGE and Monte-Carlo simulation codes
Rensink, M.E.; Rognlien, T.D.
1996-08-28
Our objective is to develop an accurate self-consistent model for plasma and neutrals in the edge of tokamak devices such as DIII-D and ITER. The two-dimensional fluid model in the UEDGE code has been used successfully for simulating a wide range of experimental plasma conditions. However, when the neutral mean free path exceeds the gradient scale length of the background plasma, the validity of the diffusive and inertial fluid models in UEDGE is questionable. In the long mean free path regime, neutrals can be accurately and efficiently described by a Monte Carlo neutrals model. Coupling the fluid plasma model in UEDGE with a Monte Carlo neutrals model should improve the accuracy of our edge plasma simulations. The results described here used the EIRENE Monte Carlo neutrals code, but since information is passed to and from the UEDGE plasma code via formatted text files, any similar neutrals code such as DEGAS2 or NIMBUS could, in principle, be used.
Papadimitroulas, P; Kagadis, GC; Loudos, G
2014-06-15
Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations incorporating pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the IT'IS Foundation. The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into the GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE Materials Database. Several radiopharmaceuticals used in SPECT and PET applications were tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest dose being absorbed in the kidneys and pancreas (9.29×10¹⁰ and 0.15×10¹⁰, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
NASA Astrophysics Data System (ADS)
Chen, Dongsheng; Zeng, Nan; Liu, Celong; Ma, Hui
2012-12-01
In this paper, we present a new method to simulate the signal of polarization-sensitive optical coherence tomography (PS-OCT) using the sphere-cylinder birefringence Monte Carlo program developed by our laboratory. Using the program, we can simulate various turbid media based on different optical models and analyze the scattering and polarization information of the simulated media. The detection area, the detection angle range and the scattering times of the photons are the three main conditions we can use to screen out the photons that contribute to the PS-OCT signal. In this paper, we study the effects of these three factors on the simulation results and find that the scattering times of the photons is the main factor affecting the signal, while the detection area and angle range are less important but still necessary conditions. To test and verify the feasibility of our simulation, we use two methods as references. One is the Extended Huygens-Fresnel (EHF) method, which is based on electromagnetic theory and can describe both single and multiple scattering of light. By comparing the results obtained from the EHF method with ours, we explore the screening regularities of the photons in the simulation. We also compare our simulation with another polarization-related simulation presented by a Russian group, and with our experimental results. Both comparisons show that our simulation is efficient for PS-OCT at superficial depths, and should be further corrected in order to simulate the PS-OCT signal at greater depths.
Chen, Yunjie; Roux, Benoît
2015-08-11
Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to exploring and sampling the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up
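The propose-then-accept structure of step (6) can be mirrored in a toy one-dimensional setting: a symmetric trial move stands in for the CG-guided neMD drive of steps (1)-(5), and a Metropolis test on the potential energy preserves the Boltzmann distribution. This is a structural sketch only; the real method's momentum-reversed neMD propagation and its detailed-balance argument are not reproduced here.

```python
import math, random

def U(x):
    return (x * x - 1.0) ** 2      # toy double-well "fine-grained" energy

def hybrid_mc(steps=50000, beta=2.0, seed=5):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        # Stand-in for steps 1-5: a symmetric Gaussian trial move plays the
        # role of the CG-guided neMD drive toward a new configuration.
        x_trial = x + rng.gauss(0.0, 0.5)
        # Step 6: Metropolis criterion keeps the Boltzmann distribution exact.
        if rng.random() < math.exp(min(0.0, -beta * (U(x_trial) - U(x)))):
            x = x_trial
        samples.append(x)
    return samples

samples = hybrid_mc()
# The double well is symmetric, so both basins should be visited.
left = sum(1 for s in samples if s < 0) / len(samples)
print(0.1 < left < 0.9)  # True
```

Whatever replaces the proposal step, the final Metropolis test is what guarantees sampling from the correct equilibrium ensemble.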
Mile, Viktória; Gereben, Orsolya; Kohara, Shinji; Pusztai, László
2012-08-16
A detailed study of the microscopic structure of two electrolyte solutions, cesium fluoride (CsF) and cesium iodide (CsI) in water, is presented. For revealing the influence of salt concentration on the structure, CsF solutions at concentrations of 15.1 and 32.3 mol % and CsI solutions at concentrations of 1.0 and 3.9 mol % are investigated. For each concentration, we combine total scattering structure factors from neutron and X-ray diffraction and 10 partial radial distribution functions from molecular dynamics simulations in one single structural model, generated by reverse Monte Carlo modeling. For the present solutions we show that the level of consistency between simulations that use simple pair potentials and experimental structure factors is at least semiquantitative for even the extremely highly concentrated CsF solutions. Remaining inconsistencies seem to be caused primarily by water-water distribution functions, whereas slightly problematic parts appear on the ion-oxygen partials, too. As a final result, we obtained particle configurations from reverse Monte Carlo modeling that were in quantitative agreement with both sets of diffraction data and most of the MD simulated partial radial distribution functions. From the particle coordinates, distributions of the number of first neighbors as well as angular correlation functions were calculated. The average number of water molecules around cations in both materials decreases from about 8.0 to about 5.1 as concentration increases, whereas the same quantity for the anions (X) changes from about 5.3 to about 3.7 in the case of CsF and from about 6.2 to about 4.0 in the case of CsI. The average angle of X···H-O particle arrangements, characteristic of anion-water hydrogen bonds, is closer to 180° than that found for O···H-O arrangements (water-water hydrogen bonds) at higher concentrations.
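The reverse Monte Carlo step itself is simple: perturb a particle, recompute the model's structure signal, and accept the move with a Metropolis-like test on the χ² misfit to the target data. The 1D toy below fits a particle histogram to a target distribution; it illustrates the RMC acceptance rule, not the actual CsF/CsI structural models, and all parameters are invented.

```python
import math, random

random.seed(6)
bins, n_part = 20, 200
# Target "experimental" histogram: a normalized Gaussian-shaped profile.
raw = [math.exp(-((i / bins - 0.5) ** 2) / 0.02) for i in range(bins)]
norm = sum(raw)
target = [t / norm for t in raw]

def chi2(pos):
    h = [0] * bins
    for x in pos:
        h[min(int(x * bins), bins - 1)] += 1
    return sum((h[i] / n_part - target[i]) ** 2 for i in range(bins))

pos = [random.random() for _ in range(n_part)]   # initial configuration
c = c0 = chi2(pos)
for _ in range(10000):
    i = random.randrange(n_part)
    old = pos[i]
    pos[i] = min(max(old + random.gauss(0.0, 0.1), 0.0), 1.0 - 1e-9)
    c_new = chi2(pos)
    # RMC acceptance: always take improvements; occasionally keep worse fits.
    if c_new > c and random.random() > math.exp((c - c_new) / 1e-4):
        pos[i] = old                              # reject, restore particle
    else:
        c = c_new
print(c < c0)  # True: the misfit to the target histogram has decreased
```

A real RMC run does the same thing in 3D against measured structure factors and partial radial distribution functions simultaneously.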
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: the accuboost is the most modern method of breast brachytherapy that is a boost method in compressed tissue by a mammography unit. the dose distribution in uncompressed tissue, as compressed tissue is important that should be characterized. Methods: In this study, the mechanical behavior of breast in mammography loading, the displacement of breast tissue and the dose distribution in compressed and uncompressed tissue, are investigated. Dosimetry was performed by two dosimeter methods of Monte Carlo simulations using MCNP5 code and thermoluminescence dosimeters. For Monte Carlo simulations, the dose values in cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated by Finite element model and calculated using ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model is constructed from MR images of 6 volunteers. Experimental dosimetery was performed by placing the thermoluminescence dosimeters into the polyvinyl alcohol breast equivalent phantom and on the proximal edge of compression plates to the chest. Results: The results indicate that using the cone applicators would deliver more than 95% of dose to the depth of 5 to 17mm, while round applicator will increase the skin dose. Nodal displacement, in presence of gravity and 60N forces, i.e. in mammography compression, was determined with 43% contraction in the loading direction and 37% expansion in orthogonal orientation. Finally, in comparison of the acquired from thermoluminescence dosimeters with MCNP5, they are consistent with each other in breast phantom and in chest's skin with average different percentage of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability of 3D dose calculation by FE Modeling. Finally, polyvinyl alcohol is a reliable material as a breast tissue equivalent dosimetric phantom that provides the ability of TLD dosimetry
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wall fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematical study of the factors responsible for the discrepancy in the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
McNally, Kevin; Cotton, Richard; Cocker, John; Jones, Kate; Bartels, Mike; Rick, David; Price, Paul; Loizou, George
2012-01-01
There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure. PMID:22719759
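Exposure reconstruction in this framework amounts to Markov chain Monte Carlo over the exposure parameter of a forward kinetic model. The sketch below inverts a one-compartment toy model, not a PBPK model of m-xylene; all parameter values are invented for illustration.

```python
import math, random

random.seed(7)
k_elim, t_obs, sigma = 0.5, 2.0, 0.05   # assumed kinetics and noise level

def biomarker(exposure):
    # Toy forward model: blood level at t_obs with first-order elimination.
    return exposure * math.exp(-k_elim * t_obs)

true_exposure = 1.2
data = [biomarker(true_exposure) + random.gauss(0.0, sigma) for _ in range(10)]

def log_post(e):
    if e <= 0:                           # flat prior on positive exposures
        return -math.inf
    pred = biomarker(e)
    return -sum((d - pred) ** 2 for d in data) / (2 * sigma ** 2)

# Random-walk Metropolis over the unknown exposure concentration.
e, chain = 1.0, []
for _ in range(20000):
    prop = e + random.gauss(0.0, 0.1)
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(e))):
        e = prop
    chain.append(e)
post_mean = sum(chain[5000:]) / len(chain[5000:])
print(round(post_mean, 2))  # close to the true exposure of 1.2
```

The published framework replaces this toy likelihood with a full PBPK model and uses global sensitivity analysis to decide which model parameters the data can actually constrain.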
Error propagation in first-principles kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Matera, Sebastian
2017-04-01
First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculations. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address the error propagation in these multiscale models. We find that we can still identify the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.
NASA Astrophysics Data System (ADS)
Wilson, J. A.; Richardson, J. A.
2015-12-01
Traditional methods used to calculate the recurrence rate of volcanism, such as linear regression, maximum likelihood and Weibull-Poisson distributions, are effective at estimating recurrence rate and confidence level, but these methods are unable to estimate the uncertainty in recurrence rate through time. We propose a new model for estimating recurrence rate and uncertainty, the Volcanic Event Recurrence Rate Model (VERRM). VERRM is an algorithm that incorporates radiometric ages, volcanic stratigraphy and paleomagnetic data into a Monte Carlo simulation, generating acceptable ages for each event. Each model run is used to calculate recurrence rate using a moving average window. These rates are binned into discrete time intervals and plotted using the 5th, 50th and 95th percentiles. We present recurrence rates from Cima Volcanic Field (CA), Yucca Mountain (NV) and Arsia Mons (Mars). Results from Cima Volcanic Field illustrate how several K-Ar ages with large uncertainties obscure three well documented volcanic episodes. Yucca Mountain results are similar to published rates and illustrate the effect of using the same radiometric age for multiple events in a spatially defined cluster. Arsia Mons results show a clear waxing/waning of volcanism through time. VERRM output may be used for a spatio-temporal model or to plot uncertainty in quantifiable parameters such as eruption volume or geochemistry. Alternatively, the algorithm may be reworked to constrain geomagnetic chrons. VERRM is implemented in Python 2.7 and takes advantage of the NumPy, SciPy and matplotlib libraries for optimization and quality plot presentation. A typical Monte Carlo simulation of 40 volcanic events takes a few minutes to a couple of hours to complete, depending on the bin size used to assign ages.
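The core idea of propagating radiometric age uncertainty into recurrence-rate percentiles can be sketched as follows. This is not the authors' VERRM algorithm: the event ages, window width, and time grid are made-up illustrative values, and the stratigraphic and paleomagnetic constraints are omitted.

```python
import random

random.seed(0)

# Hypothetical event ages (ka) with 1-sigma radiometric uncertainties;
# illustrative numbers only, not the data sets analysed in the abstract.
events = [(10, 2), (25, 3), (40, 5), (55, 4), (70, 6), (85, 3)]
t_grid = list(range(0, 100, 10))

def one_run(window=30.0):
    ages = [random.gauss(mu, sd) for mu, sd in events]
    # recurrence rate (events per ka) in a moving window centred on each t
    return [sum(1 for a in ages if abs(a - t) <= window / 2.0) / window
            for t in t_grid]

runs = [one_run() for _ in range(2000)]
percentiles = []
for i, t in enumerate(t_grid):
    col = sorted(run[i] for run in runs)
    percentiles.append((t, col[100], col[1000], col[1900]))   # 5th/50th/95th

for row in percentiles:
    print(row)
```

Each Monte Carlo run draws one acceptable age per event; binning the resulting rates across runs gives the uncertainty envelope through time.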
Resist develop prediction by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun
2002-07-01
Various resist development models have been suggested to describe the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced by using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with other published work. This study can be helpful for the development of new photoresists and developers that can be used to pattern device features smaller than 100 nm.
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
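The role of a local move set with fixed bond lengths can be shown on a far simpler system than a lipid: a chain whose conformation is described by bond-direction angles, updated by single-angle Metropolis moves (kT = 1; the chain length and stiffness constant are arbitrary assumptions, not DPPC parameters). Equipartition gives a known answer to check the sampling against.

```python
import math
import random

random.seed(3)

# Toy stand-in for a local-move set in torsion-angle space: a chain of 12
# beads with constant bond lengths; the energy penalizes bending between
# successive bond-direction angles (harmonic stiffness k_bend, kT = 1).
N = 12
phi = [0.0] * (N - 1)          # bond direction angles; bond lengths fixed
k_bend = 2.0

def energy(angles):
    return k_bend * sum((angles[i + 1] - angles[i]) ** 2
                        for i in range(len(angles) - 1))

acc = 0
d2_samples = []
for step in range(30000):
    i = random.randrange(N - 1)
    old, e_old = phi[i], energy(phi)
    phi[i] += random.uniform(-0.5, 0.5)   # local move: bond lengths untouched
    if random.random() < math.exp(min(0.0, e_old - energy(phi))):
        acc += 1
    else:
        phi[i] = old                      # reject: restore the angle
    if step >= 10000 and step % 10 == 0:
        d2_samples.extend((phi[j + 1] - phi[j]) ** 2 for j in range(N - 2))

mean_d2 = sum(d2_samples) / len(d2_samples)
print(mean_d2)   # equipartition predicts 1 / (2 * k_bend) = 0.25
```

Working in angle space with constant bond lengths is what makes the move "local": a single-angle change never distorts a bond, only the bending energy of its two neighbouring joints.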
NASA Technical Reports Server (NTRS)
Karakoylu, E.; Franz, B.
2016-01-01
This is a first attempt at quantifying uncertainties in ocean remote-sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is for remote-sensing reflectance (Rrs) at 443 nm.
Wiebe, J; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method for predicting dose deposition in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is a relatively new, open-source MC software platform built on CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm{sup 2} to 98×98 mm{sup 2} (maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
NASA Astrophysics Data System (ADS)
Fougere, Nicolas; altwegg, kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Bockelee-Morvan, Dominique; Calmonte, Ursina; Capaccioni, Fabrizio; Combi, Michael R.; De Keyser, Johan; Debout, Vincent; Erard, Stéphane; Fiethe, Björn; Filacchione, Gianrico; Fink, Uwe; Fuselier, Stephen; Gombosi, T. I.; Hansen, Kenneth C.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Migliorini, Alessandra; Piccioni, Giuseppe; Rinaldi, Giovanna; Rubin, Martin; Shou, Yinsi; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; VIRTIS Team and ROSINA Team
2016-10-01
During the past few decades, modeling of the cometary coma has seen tremendous improvements, notably with the increase of computer capacity. While the Haser model is still widely used for the interpretation of cometary observations, its rather simplistic assumptions, such as spherical symmetry and constant outflow velocity, prevent it from explaining some coma observations. Hence, more complex coma models have emerged, taking full advantage of the numerical approach. The only method that can resolve all the flow regimes encountered in the coma, due to the drastic changes in Knudsen number, is the Direct Simulation Monte Carlo (DSMC) approach. The data acquired by the instruments on board the Rosetta spacecraft provide a large number of observations regarding the spatial and temporal variations of comet 67P/Churyumov-Gerasimenko's coma. These measurements provide constraints that can be applied to the coma model in order to best describe the rarefied atmosphere of 67P. We present the latest results of our 3D multi-species DSMC model using the Adaptive Mesh Particle Simulator (Tenishev et al. 2008 and 2011, Fougere 2014). The model uses a realistic nucleus shape model from the OSIRIS team and takes into account the self-shadowing created by its concavities. The gas flux at the surface of the nucleus is deduced from the relative orientation with respect to the Sun and an activity distribution that enables the simulation of both the non-uniformity of the surface activity and the heterogeneities of the outgassing. The model results are compared to the ROSINA and VIRTIS observations. Progress in incorporating Rosetta measurements from the last half of the mission into our models will be presented. The good agreement between the model and these measurements from two very different techniques reinforces our understanding of the physical processes taking place in the coma.
McMillan, K; Bostani, M; McNitt-Gray, M; McCollough, C
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not.
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ramachandra, Ranjith M.; Panse, Ashish; Balla, Deepika; Yan, Jianhua; Carson, Richard E.
2010-04-01
We previously designed a component based 3-D PSF model to obtain a compact yet accurate system matrix for a dedicated human brain PET scanner. In this work, we adapted the model to a small animal PET scanner. Based on the model, we derived the system matrix for back-to-back gamma source in air, fluorine-18 and iodine-124 source in water by Monte Carlo simulation. The characteristics of the PSF model were evaluated and the performance of the newly derived system matrix was assessed by comparing its reconstructed images with the established reconstruction program provided on the animal PET scanner. The new system matrix showed strong PSF dependency on the line-of-response (LOR) incident angle and LOR depth. This confirmed the validity of the two components selected for the model. The effect of positron range on the system matrix was observed by comparing the PSFs of different isotopes. A simulated and an experimental hot-rod phantom study showed that the reconstruction with the proposed system matrix achieved better resolution recovery as compared to the algorithm provided by the manufacturer. Quantitative evaluation also showed better convergence to the expected contrast value at similar noise level. In conclusion, it has been shown that the system matrix derivation method is applicable to the animal PET system studied, suggesting that the method may be used for other PET systems and different isotope applications.
Monte Carlo Simulation of Surface Reactions
NASA Astrophysics Data System (ADS)
Brosilow, Benjamin J.
A Monte-Carlo study of the catalytic reaction of CO and O_2 over transition metal surfaces is presented, using generalizations of a model proposed by Ziff, Gulari and Barshad (ZGB). A new "constant -coverage" algorithm is described and applied to the model in order to elucidate the behavior near the model's first -order transition, and to draw an analogy between this transition and first-order phase transitions in equilibrium systems. The behavior of the model is then compared to the behavior of CO oxidation systems over Pt single-crystal catalysts. This comparison leads to the introduction of a new variation of the model in which one of the reacting species requires a large ensemble of vacant surface sites in order to adsorb. Further, it is shown that precursor adsorption and an effective Eley-Rideal mechanism must also be included in the model in order to obtain detailed agreement with experiment. Finally, variations of the model on finite and two component lattices are studied as models for low temperature CO oxidation over Noble Metal/Reducible Oxide and alloy catalysts.
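A minimal sketch of a ZGB-type lattice simulation follows, simplified from the original rules (one reaction attempt per adsorption event rather than exhaustive removal of all adjacent CO-O pairs); the lattice size, trial count, and CO arrival fraction are illustrative choices.

```python
import random

random.seed(7)

# Simplified ZGB-type surface-reaction model. 0 = empty, 1 = CO, 2 = O.
L = 40
lat = [[0] * L for _ in range(L)]
y_co = 0.45                   # CO arrival fraction, inside the reactive window

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def react(i, j):
    """React the adsorbate at (i, j) with one adjacent partner, if any."""
    partner = 2 if lat[i][j] == 1 else 1
    for x, y in neighbors(i, j):
        if lat[x][y] == partner:
            lat[i][j] = lat[x][y] = 0   # CO + O -> CO2 frees both sites
            return

for trial in range(200000):
    i, j = random.randrange(L), random.randrange(L)
    if lat[i][j] != 0:
        continue
    if random.random() < y_co:          # CO adsorbs on a single empty site
        lat[i][j] = 1
        react(i, j)
    else:                               # O2 dissociates onto two empty sites
        x, y = random.choice(neighbors(i, j))
        if lat[x][y] == 0:
            lat[i][j] = lat[x][y] = 2
            react(i, j)
            if lat[x][y] == 2:
                react(x, y)

theta_co = sum(row.count(1) for row in lat) / L ** 2
theta_o = sum(row.count(2) for row in lat) / L ** 2
print(theta_co, theta_o)      # reactive steady state: neither coverage near 1
```

The model's two poisoning transitions appear when y_co is pushed below roughly 0.39 (O poisoning) or above roughly 0.525 (CO poisoning); the value used here sits inside the reactive window.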
Accelerated Monte Carlo simulations with restricted Boltzmann machines
NASA Astrophysics Data System (ADS)
Huang, Li; Wang, Lei
2017-01-01
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
Leonhard, Kai; Prausnitz, John M.; Radke, Clayton J.
2004-01-01
Amino acid residue–solvent interactions are required for lattice Monte Carlo simulations of model proteins in water. In this study, we propose an interaction-energy scale that is based on the interaction scale by Miyazawa and Jernigan. It permits systematic variation of the amino acid–solvent interactions by introducing a contrast parameter for the hydrophobicity, Cs, and a mean attraction parameter for the amino acids, ω. Changes in the interaction energies strongly affect many protein properties. We present an optimized energy parameter set for best representing realistic behavior typical for many proteins (fast folding and high cooperativity for single chains). Our optimal parameters feature a much weaker hydrophobicity contrast and mean attraction than does the original interaction scale. The proposed interaction scale is designed for calculating the behavior of proteins in bulk and at interfaces as a function of solvent characteristics, as well as protein size and sequence. PMID:14739322
NASA Astrophysics Data System (ADS)
Shi, Feng; Wang, Dezhen; Ren, Chunsheng
2008-06-01
Atmospheric-pressure nonequilibrium discharge plasmas have been applied to plasma processing in modern technology. Simulations of discharges in pure Ar and pure He gases at atmospheric pressure, driven by a high-voltage trapezoidal nanosecond pulse, have been performed using a one-dimensional particle-in-cell Monte Carlo collision (PIC-MCC) model coupled with a renormalization and weighting procedure (mapping algorithm). Numerical results show that the characteristics of the discharge in both inert gases are very similar. There exist local reverse-field effects and double-peak distributions of the charged-particle densities. The electron and ion energy distribution functions are also observed, and the discharge is interpreted in terms of the growth of the ionization avalanche. Furthermore, the total current density is a function of time but not of position.
Dong, Jing; Xiong, Wei; Chen, Yuancheng; Zhao, Yunfeng; Lu, Yang; Zhao, Di; Li, Wenyan; Liu, Yanhui; Chen, Xijing
2016-03-01
In this study, a population pharmacokinetic (PPK) model of biapenem in Chinese patients with lower respiratory tract infections (LRTIs) was developed and optimal dosage regimens based on Monte Carlo simulation were proposed. A total of 297 plasma samples from 124 Chinese patients were assayed chromatographically in a prospective, single-centre, open-label study, and pharmacokinetic parameters were analysed using NONMEM. Creatinine clearance (CLCr) was found to be the most significant covariate affecting drug clearance. The final PPK model was: CL (L/h)=9.89+(CLCr-66.56)×0.049; Vc (L)=13; Q (L/h)=8.74; and Vp (L)=4.09. Monte Carlo simulation indicated that for a target of ≥40% T>MIC (duration that the plasma level exceeds the causative pathogen's MIC), the biapenem pharmacokinetic/pharmacodynamic (PK/PD) breakpoint was 4μg/mL for doses of 0.3g every 6h (3-h infusion) and 1.2g (24-h continuous infusion). For a target of ≥80% T>MIC, the PK/PD breakpoint was 4μg/mL for a dose of 1.2g (24-h continuous infusion). The probability of target attainment (PTA) could not achieve ≥90% at the usual biapenem dosage regimen (0.3g every 12h, 0.5-h infusion) when the MIC of the pathogenic bacteria was 4μg/mL, which most likely resulted in unsatisfactory clinical outcomes in Chinese patients with LRTIs. Higher doses and longer infusion time would be appropriate for empirical therapy. When the patient's symptoms indicated a strong suspicion of Pseudomonas aeruginosa or Acinetobacter baumannii infection, it may be more appropriate for combination therapy with other antibacterial agents.
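The Monte Carlo PTA calculation described above can be sketched with a one-compartment simplification (the study fits a two-compartment model); the typical CL and Vc values are taken from the abstract, while the log-normal population variability magnitudes are illustrative assumptions.

```python
import math
import random

random.seed(42)

def ft_above_mic(cl, v, dose=0.3, tinf=3.0, tau=6.0, mic=0.004):
    """Fraction of the dosing interval with concentration above the MIC.

    One-compartment infusion model; dose in g, volumes in L, times in h,
    MIC 4 ug/mL expressed as 0.004 g/L.
    """
    k, rate = cl / v, dose / tinf
    c_end = rate / cl * (1.0 - math.exp(-k * tinf))
    def conc(t):
        if t <= tinf:
            return rate / cl * (1.0 - math.exp(-k * t))
        return c_end * math.exp(-k * (t - tinf))
    grid = [i * tau / 600.0 for i in range(601)]
    return sum(1 for t in grid if conc(t) > mic) / len(grid)

# Population spread around the reported typical values (CL 9.89 L/h,
# Vc 13 L); the variability magnitudes below are assumptions.
n = 2000
hits = sum(
    1 for _ in range(n)
    if ft_above_mic(9.89 * math.exp(random.gauss(0.0, 0.3)),
                    13.0 * math.exp(random.gauss(0.0, 0.2))) >= 0.40)
pta = hits / n
print(pta)   # probability of attaining >=40% T>MIC at MIC = 4 ug/mL
```

The PK/PD breakpoint is then the highest MIC at which this PTA stays at or above 90%, which is how the regimens in the abstract were ranked.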
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
NASA Astrophysics Data System (ADS)
Soligo, Riccardo
In this work, the insight provided by our sophisticated Full Band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices like GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to the setup of a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly nonlinear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contributions provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small-signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitive phenomena that limit the frequency response of the devices. In Chapter 5, the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. Moreover, the large signal characterization of the
Monte Carlo simulations of particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.
1994-01-01
The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta(sub B1) to the shock normal. Spectral results from test particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.
Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J
2007-10-21
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in
Morphological evolution of growing crystals - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz
1988-01-01
The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
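The diffusion-limited side of such a growth model reduces, in its simplest form, to on-lattice diffusion-limited aggregation; a sticking probability below 1 crudely mimics attachment kinetics, recovering the kinetic-vs-diffusion control spectrum mentioned above. All parameters here are illustrative, not the paper's.

```python
import random

random.seed(5)

# On-lattice diffusion-limited aggregation; p_stick = 1 gives open dendritic
# growth, p_stick < 1 crudely mimics kinetics-limited (more compact) growth.
L = 61
grid = [[0] * L for _ in range(L)]
grid[L // 2][L // 2] = 1                 # seed crystal at the centre
p_stick = 1.0

def touches_cluster(x, y):
    return any(grid[x + dx][y + dy]
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= x + dx < L and 0 <= y + dy < L)

attached = 0
while attached < 200:
    # launch a random walker from a random edge site
    x, y = random.choice(
        [(0, random.randrange(L)), (L - 1, random.randrange(L)),
         (random.randrange(L), 0), (random.randrange(L), L - 1)])
    while True:
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        if not (0 <= x < L and 0 <= y < L):
            break                        # escaped the box: relaunch
        if grid[x][y]:
            break                        # stepped onto the cluster: relaunch
        if touches_cluster(x, y) and random.random() < p_stick:
            grid[x][y] = 1               # attach next to the cluster
            attached += 1
            break

size = sum(row.count(1) for row in grid)
print(size)   # 201: the seed plus 200 attached particles
```

Adding surface diffusion (letting a freshly attached particle hop to a better-coordinated neighbouring site) is the next ingredient the paper's model layers on top of this skeleton.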
NASA Astrophysics Data System (ADS)
Monceau, Pascal
2006-09-01
The extension of the phase diagram of the q-state Potts model to noninteger dimension is investigated by means of Monte Carlo simulations on Sierpinski and Menger fractal structures. Both multicanonical and canonical simulations have been carried out with the help of the Wang-Landau and the Wolff cluster algorithms. Lower bounds are provided for the critical values qc of q where a first-order transition is expected in the cases of two structures whose fractal dimension is smaller than 2: The transitions associated with the seven-state and ten-state Potts models on Sierpinski carpets with fractal dimensions df≃1.8928 and df≃1.7925 , respectively, are shown to be second-order ones, the renormalization eigenvalue exponents yh are calculated, and bounds are provided for the renormalization eigenvalue exponents yt and the critical temperatures. Moreover, the results suggest that second-order transitions are expected to occur for very large values of q when the fractal dimension is lowered below 2—that is, in the case of hierarchically weakly connected systems with an infinite ramification order. Finally, the transition associated with the four-state Potts model on a fractal structure with a dimension df≃2.631 is shown to be a weakly first-order one.
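The q-state Potts model itself is easy to simulate with single-spin Metropolis updates; the sketch below uses an ordinary 2D square lattice (not a fractal) with q = 3 and a coupling beta above the exact 2D critical value, together with an order parameter that is near 0 for a disordered state and 1 for a fully ordered one. Lattice size, sweep count, and beta are illustrative choices.

```python
import math
import random

random.seed(11)

q, L, beta = 3, 16, 1.2        # 3-state Potts on a 16x16 torus, beta > beta_c
spin = [[random.randrange(q) for _ in range(L)] for _ in range(L)]

def local_energy(i, j, s):
    # -1 per aligned nearest neighbour (ferromagnetic coupling J = 1)
    return -sum(s == spin[(i + di) % L][(j + dj) % L]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for sweep in range(1000):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        new = random.randrange(q)
        dE = local_energy(i, j, new) - local_energy(i, j, spin[i][j])
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spin[i][j] = new

counts = [sum(row.count(s) for row in spin) for s in range(q)]
m = (q * max(counts) / (L * L) - 1.0) / (q - 1.0)
print(m)   # near 1 deep in the ordered phase, near 0 when disordered
```

Near a transition, local updates like these suffer critical slowing down, which is precisely why the study above also employs Wolff cluster moves and Wang-Landau multicanonical sampling.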
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics fixed-source and criticality calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear
Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography
Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.
2000-02-01
The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
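Henyey-Greenstein scattering, as used in this style of Monte Carlo photonics, is straightforward to sample by inverse transform; a built-in sanity check is that the mean scattering cosine equals the anisotropy parameter g. The value g = 0.9 below is a typical soft-tissue figure, used here only as an illustration.

```python
import math
import random

random.seed(9)

def sample_hg_cos(g):
    """Inverse-transform sample of cos(theta) from the HG phase function."""
    if abs(g) < 1e-6:
        return 2.0 * random.random() - 1.0      # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * random.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

g = 0.9                 # strongly forward-peaked, typical of soft tissue
n = 200000
mean_cos = sum(sample_hg_cos(g) for _ in range(n)) / n
print(mean_cos)         # the mean cosine of the HG distribution equals g
```

A full photon-transport code would use each sampled cosine (plus a uniform azimuth) to rotate the photon direction between scattering events; swapping this sampler for a tabulated Mie phase function is how the comparison in the abstract is set up.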
Direct simulation Monte Carlo investigation of hydrodynamic instabilities in gases
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.; Plimpton, S. J.
2016-11-01
The Rayleigh-Taylor instability (RTI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, two-dimensional and three-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode-perturbed interfaces between two atmospheric-pressure monatomic gases. The DSMC simulations reproduce all qualitative features of the RTI and are in reasonable quantitative agreement with existing theoretical and empirical models in the linear, nonlinear, and self-similar regimes. At late times, the instability is seen to exhibit a self-similar behavior, in agreement with experimental observations. For the conditions simulated, diffusion can influence the initial instability growth significantly.
Shin, J; Merchant, T E; Lee, S; Li, Z; Shin, D; Farr, J B
2015-06-15
Purpose: To reconstruct phase-space information upstream of patient-specific collimators for Monte Carlo simulations using only radiotherapy planning system data. Methods: The proton energies are calculated from residual ranges, e.g., the sum of the prescribed range in the patient and the SSD. The Kapchinskij and Vladimirskij (KV) distribution was applied to sample the protons' x-y positions and momentum directions, and the beam shape was assumed to be a circle. Free parameters, e.g., the initial energy spread and the emittance of the KV distribution, were estimated by benchmarking against commissioning data in a commercial treatment planning system for an operational proton therapy center. The number of histories, which defines the height of the individual pristine Bragg peaks (BPs) of a spread-out Bragg peak (SOBP), is weighted based on beam current modulation, and a correction factor is applied to account for the fluence reduction as the residual range decreases due to the rotation of the range modulator wheel. The time-dependent behaviors, e.g., the changes of the residual range and histories per pristine BP, are realized using TOPAS (Tool for Particle Simulation). Results: Benchmarking simulations for selected SOBPs ranging from 7.5 cm to 15.5 cm matched measurement data in a water phantom within 2 mm in range and up to 5 mm in SOBP width. We found this model tends to underestimate the entrance dose by about 5% compared with measurement. This was attributed to the energy distribution used in the model being limited in granularity to a single energy spectrum for the narrow-angle modulator steps used in the proximal pull-back region of the SOBPs. Conclusion: Within these limitations, the source modeling method proved an acceptable alternative to a full treatment head simulation when machine geometry and materials information are not available.
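KV sampling amounts to drawing points uniformly on the surface of a 4D phase-space ellipsoid, which can be done by normalizing a 4D Gaussian draw and scaling by the semi-axes. A minimal sketch (the semi-axis values are hypothetical, not the commissioned beam parameters):

```python
import numpy as np

def sample_kv(n, a, ap, b, bp, rng):
    """Sample n particles from a Kapchinskij-Vladimirskij (KV) distribution:
    uniform on the surface of the 4D phase-space ellipsoid with semi-axes
    a, ap in (x, x') and b, bp in (y, y')."""
    g = rng.normal(size=(n, 4))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on unit 3-sphere
    return u * np.array([a, ap, b, bp])               # columns: x, x', y, y'

rng = np.random.default_rng(0)
axes = np.array([5.0, 2e-3, 5.0, 2e-3])               # mm, rad (illustrative)
pts = sample_kv(100_000, *axes, rng)
# KV invariant: every particle sits exactly on the ellipsoid surface
s = ((pts / axes) ** 2).sum(axis=1)
print(np.allclose(s, 1.0))
```

The defining KV property checked at the end is that the normalized quadratic form equals one for every particle, so all projections onto 2D subspaces are uniformly filled ellipses.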
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near second-order transitions and to metastability near first-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
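The random walk in energy space is the Wang-Landau algorithm of reference 2. A minimal sketch for a toy system with an exactly known density of states (N independent spins, so g(E) is a binomial coefficient) shows the flat-histogram update; the flatness check is simplified here to fixed-length stages:

```python
import math, random

random.seed(3)
N = 12
lng = [0.0] * (N + 1)          # running estimate of ln g(E), E = # of up spins
state = [0] * N
E = 0
f = 1.0                        # ln(modification factor), halved each stage

while f > 1e-4:
    for _ in range(20_000):
        i = random.randrange(N)
        Enew = E + (1 if state[i] == 0 else -1)
        # accept with min(1, g(E)/g(Enew)) so visits flatten over energy
        if random.random() < math.exp(min(0.0, lng[E] - lng[Enew])):
            state[i] ^= 1
            E = Enew
        lng[E] += f            # always update the current energy level
    f /= 2.0                   # refine the modification factor

# compare the normalized estimate against the exact binomial coefficients
est = [v - lng[0] for v in lng]
exact = [math.log(math.comb(N, k)) for k in range(N + 1)]
err = max(abs(a - b) for a, b in zip(est, exact))
print(err < 0.5)
```

Once ln g(E) is in hand, any canonical average follows by reweighting with exp(-E/kT), which is the sense in which "all thermodynamic properties can be calculated".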
Estimating return period of landslide triggering by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Peres, D. J.; Cancelliere, A.
2016-10-01
Assessment of landslide hazard is a crucial step for landslide mitigation planning. Estimation of the return period of slope instability represents a quantitative method to map landslide triggering hazard over a catchment. The most common approach to estimating return periods consists in coupling a triggering threshold equation, derived from a hydrological and slope stability process-based model, with a rainfall intensity-duration-frequency (IDF) curve. Such a traditional approach generally neglects the effect of rainfall intensity variability within events, as well as the variability of initial conditions, which depend on antecedent rainfall. We propose a Monte Carlo approach for estimating the return period of shallow landslide triggering which accounts for both sources of variability. Synthetic hourly rainfall-landslide data generated by Monte Carlo simulations are analysed to compute return periods as the mean interarrival time of a factor of safety less than one. Applications are first conducted to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to evaluate the traditional IDF-based method by comparison with the Monte Carlo one. Results show that the return period is affected significantly by the variability of both rainfall intensity within events and initial conditions, and that the traditional IDF-based approach may lead to an overestimation of the return period of landslide triggering, or, in other words, a non-conservative assessment of landslide hazard.
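The return-period computation, the mean interarrival time of failures over a long synthetic record, can be sketched with a deliberately simplified triggering rule (all distributions, rates, and thresholds below are illustrative, not the calibrated Peloritani model):

```python
import numpy as np

rng = np.random.default_rng(7)
years = 10_000                                     # length of synthetic record
n = rng.poisson(25 * years)                        # storms, ~25 per year
intensity = rng.exponential(scale=10.0, size=n)    # event intensity (mm/h)
wetness = rng.uniform(0.0, 1.0, size=n)            # antecedent saturation, 0..1

# hypothetical factor-of-safety proxy: drier slopes tolerate higher intensity,
# so the triggering threshold decreases with antecedent wetness
threshold = 40.0 + 30.0 * (1.0 - wetness)
failures = np.count_nonzero(intensity > threshold) # events with FS < 1
return_period = years / failures                   # mean interarrival time (yr)
print(failures > 0 and return_period > 0)
```

Because the antecedent wetness enters the threshold, the estimate automatically folds in the initial-condition variability that a fixed IDF threshold ignores.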
NASA Astrophysics Data System (ADS)
Shrestha, Suman; Vedantham, Srinivasan; Karellas, Andrew
2017-03-01
In digital breast tomosynthesis and digital mammography, the x-ray beam filter material and thickness vary between systems. Replacing K-edge filters with Al was investigated with the intent to reduce exposure duration and to simplify system design. Tungsten target x-ray spectra were simulated with K-edge filters (50 µm Rh; 50 µm Ag) and Al filters of varying thickness. Monte Carlo simulations were conducted to quantify the x-ray scatter from various filters alone, the scatter-to-primary ratio (SPR) with compressed breasts, and the radiation dose to the breast. These data were used to analytically compute the signal-difference-to-noise ratio (SDNR) at unit (1 mGy) mean glandular dose (MGD) for W/Rh and W/Ag spectra. At SDNR matched between K-edge and Al filtered spectra, the reductions in exposure duration and MGD were quantified for three strategies: (i) fixed Al thickness and matched tube potential in kilovolts (kV); (ii) fixed Al thickness and varying the kV to match the half-value layer (HVL) between Al and K-edge filtered spectra; and, (iii) matched kV and varying the Al thickness to match the HVL between Al and K-edge filtered spectra. Monte Carlo simulations indicate that the SPR with and without the breast did not differ between Al and K-edge filters. Modelling for fixed Al thickness (700 µm) and kV matched to K-edge filtered spectra, identical SDNR was achieved with 37–57% reduction in exposure duration and with 2–20% reduction in MGD, depending on breast thickness. Modelling for fixed Al thickness (700 µm) and HVL matched by increasing the kV over a (0, 4) kV range, identical SDNR was achieved with 62–65% decrease in exposure duration and with 2–24% reduction in MGD, depending on breast thickness. For kV and HVL matched to K-edge filtered spectra by varying the Al filter thickness over the (700, 880) µm range, identical SDNR was achieved with 23–56% reduction in exposure duration and 2–20% reduction in MGD, depending on breast thickness.
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
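The single-loop majorant idea can be illustrated for the additive kernel, where one pass over the particles yields the bound K(v_i, v_j) = v_i + v_j <= 2 v_max used in acceptance-rejection; every attempted jump, accepted or fictitious, advances the clock at the majorant rate. This is a sketch of the general thinning scheme, not the authors' differentially-weighted GPU code:

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.exponential(1.0, size=2000)        # simulation-particle volumes
t = 0.0
for _ in range(500):                       # 500 coagulation events
    n = len(v)
    Kmax = 2.0 * v.max()                   # majorant kernel from a single pass
    rate = 0.5 * n * (n - 1) * Kmax        # majorant total coagulation rate
    while True:
        t += rng.exponential(1.0 / rate)   # clock advances on every attempt
        i = int(rng.integers(n))
        j = int(rng.integers(n - 1))
        j = j + 1 if j >= i else j         # uniform pair with i != j
        if rng.random() < (v[i] + v[j]) / Kmax:
            break                          # real (accepted) coagulation
    v[i] += v[j]                           # merge volumes ...
    v = np.delete(v, j)                    # ... and remove one particle

print(len(v))                              # 2000 particles - 500 events
```

The double loop over all pairs never appears: only the maximum volume is needed to bound every pairwise rate, and the rejection probability corrects the bias exactly.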
Wu, H; Baynes, R E; Leavens, T; Tell, L A; Riviere, J E
2013-06-01
The objective of this study was to develop a population pharmacokinetic (PK) model and predict tissue residues and the withdrawal interval (WDI) of flunixin in cattle. Data were pooled from published PK studies in which flunixin was administered through various dosage regimens to diverse populations of cattle. A set of liver data used to establish the regulatory label withdrawal time (WDT) was also used in this study. Compartmental models with first-order absorption and elimination were fitted to plasma and liver concentrations by a population PK modeling approach. Monte Carlo simulations were performed with the population mean and variabilities of PK parameters to predict liver concentrations of flunixin. The PK of flunixin was described best by a 3-compartment model with an extra liver compartment. The WDI estimated in this study with liver data only was the same as the label WDT. However, a longer WDI was estimated when both plasma and liver data were included in the population PK model. This study questions the use of small groups of healthy animals to determine WDTs for drugs intended for administration to large, diverse populations. This may warrant a reevaluation of the current procedure for establishing WDTs to prevent violative residues of flunixin.
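The Monte Carlo step, sampling per-animal PK parameters, simulating tissue depletion, and reading a withdrawal interval off an upper population percentile, can be sketched with a mono-exponential depletion model (all parameter values and the tolerance limit are illustrative, not flunixin label values):

```python
import numpy as np

rng = np.random.default_rng(11)
n_animals = 10_000
# per-animal initial liver concentration and elimination half-life (lognormal)
c0 = rng.lognormal(mean=np.log(500.0), sigma=0.3, size=n_animals)       # ng/g
half_life = rng.lognormal(mean=np.log(20.0), sigma=0.25, size=n_animals)  # h
k = np.log(2.0) / half_life
tolerance = 125.0                                   # ng/g, hypothetical limit

t = np.arange(0, 24 * 14, 1.0)                      # hourly grid over 14 days
conc = c0[:, None] * np.exp(-k[:, None] * t[None, :])
p99 = np.percentile(conc, 99, axis=0)               # upper tail of population
wdi_hours = t[np.argmax(p99 < tolerance)]           # first time below the limit
print(wdi_hours > 0)
```

Basing the interval on the 99th percentile rather than a small-group mean is exactly what makes the population approach more conservative than a WDT derived from a handful of healthy animals.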
NASA Technical Reports Server (NTRS)
Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.
2003-01-01
The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" analyses. Currently, the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.
Lou, K; Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y; Clark, J
2014-06-01
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: a small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and a larger phantom with a human brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and their imaging by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity ranges. The accuracy and precision of these activity ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separately from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity range are dominated by the number N of coincidence events in the reconstructed image, and their uncertainty decreases in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
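The reported count-statistics scaling can be illustrated directly: estimate a range-like quantity from N sampled events and watch its trial-to-trial spread shrink as 1/sqrt(N). The depth-activity profile here is a hypothetical Gaussian, not the simulated PMMA profile:

```python
import numpy as np

rng = np.random.default_rng(5)

def range_spread(n_events, trials=400):
    # each trial: estimate the activity range from n_events sampled depths
    # drawn from an assumed profile centered at 150 mm with 5 mm spread
    estimates = [rng.normal(150.0, 5.0, n_events).mean() for _ in range(trials)]
    return float(np.std(estimates))

s_1k, s_4k = range_spread(1_000), range_spread(4_000)
print(s_4k < s_1k)          # 4x the counts -> roughly 2x better precision
```

Quadrupling the coincidence count halves the spread of the range estimate, which is the 1/sqrt(N) behavior the abstract attributes to counting statistics.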
NASA Technical Reports Server (NTRS)
Combi, Michael R.
2004-01-01
In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites and their interactions with their outer particle and field environments, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamic models and their direct use in data analysis. He has also worked in ground-based and space-based remote observational work and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year. The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma that will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the
Monte Carlo modeling and meteor showers
NASA Technical Reports Server (NTRS)
Kulikova, N. V.
1987-01-01
Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters
Brandt, D.; Asai, M.; Brink, P. L.; Cabrera, B.; do Couto e Silva, E.; Kelsey, M.; Leman, S. W.; McArthy, K.; Resch, R.; Wright, D.; Figueroa-Feliciano, E.
2012-06-12
There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.
Technology Transfer Automated Retrieval System (TEKTRAN)
A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50 °C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
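A minimal kinetic Monte Carlo sketch of carrier hopping in an energetically disordered layer conveys the idea: Miller-Abrahams rates between neighboring sites under a small bias field, with stronger disorder producing slower drift. The parameters are illustrative, not fitted to any material, and the model is a 1D toy rather than the authors' coarse-grained lattice:

```python
import numpy as np

rng = np.random.default_rng(4)

def drift_velocity(sigma, n_sites=200, field=0.02, kT=0.025, steps=20_000):
    e = rng.normal(0.0, sigma, n_sites)          # Gaussian site energies (eV)
    pos, t = 0, 0.0
    for _ in range(steps):
        rates = []
        for d in (+1, -1):                       # hop right / hop left
            dE = e[(pos + d) % n_sites] - e[pos % n_sites] - d * field
            # Miller-Abrahams: penalize uphill hops, downhill at full rate
            rates.append(np.exp(-dE / kT) if dE > 0 else 1.0)
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)        # KMC waiting time
        pos += 1 if rng.random() < rates[0] / total else -1
    return pos / t                               # sites per unit time

v_ordered = drift_velocity(sigma=0.0)            # no disorder
v_disordered = drift_velocity(sigma=0.1)         # 4 kT of disorder
print(v_disordered < v_ordered)
```

Deep tail states dominate the waiting-time sum in the disordered case, which is the microscopic reason mobility drops in the disordered regions of a heterogeneous active layer.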
Cluster Monte Carlo simulations of the nematic-isotropic transition
NASA Astrophysics Data System (ADS)
Priezjev, N. V.; Pelcovits, Robert A.
2001-06-01
We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.
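The cluster update can be illustrated with the Wolff algorithm for Ising spins, the spin-system method that Kunz and Zumbach modified for nematics (the Lebwohl-Lasher reflection move itself is not reproduced here): grow a cluster with bond probability p = 1 - exp(-2*beta*J) and flip it wholesale, which suppresses critical slowing down.

```python
import numpy as np

rng = np.random.default_rng(9)
L, beta = 16, 0.5                     # beta*J; 2D Ising critical value ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta)     # Wolff bond probability

def wolff_step(s):
    i, j = int(rng.integers(L)), int(rng.integers(L))
    seed = s[i, j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:                      # grow cluster over aligned neighbors
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L   # periodic boundaries
            if (nx, ny) not in cluster and s[nx, ny] == seed \
                    and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        s[x, y] = -seed               # flip the whole cluster at once
    return len(cluster)

sizes = [wolff_step(spins) for _ in range(2000)]
m = abs(spins.sum()) / (L * L)
print(0.0 <= m <= 1.0)
```

Because entire correlated regions flip in one move, the autocorrelation time near the transition grows far more slowly with system size than under single-spin dynamics, which is what made free-energy measurements up to linear size 70 practical.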
Efficient kinetic Monte Carlo simulation of annealing in semiconductor materials
NASA Astrophysics Data System (ADS)
Hargrove, Paul Hamilton
As the semiconductor manufacturing industry advances, the length scales of devices are shrinking rapidly, in accordance with the predictions of Moore's Law. As the device dimensions shrink, the importance of predictive process modeling to the development of the production process is growing. Of particular importance are predictive models which can be applied to process conditions not easily accessible via experiment. Therefore models based on physical understanding are gaining importance relative to models based on empirical fits alone. One promising research area in physics-based modeling is kinetic Monte Carlo (kMC) modeling of atomistic processes. This thesis explores kMC modeling of annealing and diffusion processes. After providing the necessary background to understand and motivate the research, a detailed review of simulation using this class of models is presented which exposes the motivation for using these models and establishes the state of the field. The author provides a user's manual for ANISRA (ANnealIng Simulation libRAry), a computer code for on-lattice kMC simulations. This library is intended as a reusable tool for the development of simulation codes for atomistic models covering a wide variety of problems. Thus care has been taken to separate the core functionality of a simulation from the specification of the model. This thesis also compares the performance of data structures for the kMC simulation problem and recommends some novel approaches. These recommendations are applicable to a wider class of model than is ANISRA, and thus of potential interest even to researchers who implement their own simulators. Three example simulations are built from ANISRA and are presented to show the applicability of this class of model to problems of interest in semiconductor process modeling. The differences between the models simulated display the versatility of the code library. The small amount of code written to construct and modify these
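The core event-selection loop that an on-lattice kMC library of this kind provides can be sketched as follows: maintain a rate catalog, pick an event with probability proportional to its rate, and advance time by an exponential deviate at the total rate. The event names, barriers, and attempt frequency are illustrative, not ANISRA's model:

```python
import math, random

random.seed(6)
kT = 0.06                                   # eV, roughly a 700 K anneal
barriers = {"hop": 0.4, "exchange": 0.7, "detach": 1.1}      # eV, hypothetical
rates = {name: 1e13 * math.exp(-Ea / kT) for name, Ea in barriers.items()}
total = sum(rates.values())

t, counts = 0.0, {name: 0 for name in rates}
for _ in range(100_000):
    r = random.random() * total             # roulette-wheel event selection
    for name, rate in rates.items():
        r -= rate
        if r <= 0.0:
            counts[name] += 1
            break
    t += random.expovariate(total)          # residence time before next event

print(counts["hop"] > counts["exchange"] > counts["detach"])
```

The data-structure question the thesis studies is precisely how to make this selection and the subsequent rate updates fast when the catalog holds millions of lattice events rather than three.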
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
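The quantum Zeno effect in such a two-level setting can be sketched with a Monte Carlo over measurement outcomes: between projective measurements the state undergoes Rabi oscillation with flip probability sin^2(omega*tau), so frequent measurement (small tau) freezes the state in its initial level. This is a toy two-level illustration, not the paper's multiple-well tunneling model:

```python
import numpy as np

rng = np.random.default_rng(8)

def survival(tau, total_time=1.0, omega=np.pi / 2, trials=5000):
    """Fraction of trials still in the initial level after projective
    measurements every tau, over a fixed total evolution time."""
    n_meas = int(total_time / tau)
    p_flip = np.sin(omega * tau) ** 2       # Rabi flip probability per interval
    stayed = 0
    for _ in range(trials):
        state = 0
        for _ in range(n_meas):
            if rng.random() < p_flip:       # measurement projects onto a flip
                state ^= 1
        stayed += (state == 0)
    return stayed / trials

print(survival(tau=0.02) > survival(tau=0.5))   # more measurements, more frozen
```

Decoherence enters such models as an effective cap on how small tau can meaningfully be, which is why the effect breaks down on timescales beyond the brain decoherence time in the paper's simulations.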
NASA Astrophysics Data System (ADS)
Stratis, A.; Zhang, G.; Jacobs, R.; Bogaerts, R.; Bosmans, H.
2016-12-01
In order to carry out Monte Carlo (MC) dosimetry studies, voxel phantoms modeling human anatomy, built by organ-based segmentation of CT image data sets, are applied in simulation frameworks. The resulting voxel phantoms preserve the patient CT acquisition geometry; in the case of head voxel models built upon head CT images, the head support with which CT scanners are equipped introduces an inclination to the head, and hence to the head voxel model. In dental cone beam CT (CBCT) imaging, patients are always positioned in such a way that the Frankfort line is horizontal, implying that there is no head inclination. The orientation of the head is important, as it influences the distance of critical radiosensitive organs like the thyroid and the esophagus from the x-ray tube. This work aims to propose a procedure to adjust head voxel phantom orientation, and to investigate the impact of head inclination on organ doses in dental CBCT MC dosimetry studies. The adult female ICRP phantom and three in-house-built paediatric voxel phantoms were used in this study. An EGSnrc MC framework was employed to simulate two commonly used protocols: a standard-resolution protocol on a Morita Accuitomo 170 dental CBCT scanner (FOVs: 60 × 60 mm² and 80 × 80 mm²) and a 3D Teeth protocol (FOV: 100 × 90 mm²) on a Planmeca Promax 3D MAX scanner. Result analysis revealed large differences in absorbed organ dose for radiosensitive organs between the original and the geometrically corrected voxel models of this study, ranging from -45.6% to 39.3%. Therefore, accurate dental CBCT MC dose calculations require geometrical adjustments to be applied to head voxel models.
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation/quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations, using either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Kern, Christoph
2016-03-23
This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions), with regard to a change in SO2 column density or aerosol optical thickness, can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes, using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.
NASA Astrophysics Data System (ADS)
Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael
2013-05-01
In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
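A linear compartment chain of the kind used to drive the phantom's time-activity curves can be sketched as plasma -> kidney -> bladder with first-order rate constants (illustrative values, not fitted MAG3 kinetics). Because the model only moves tracer between compartments, the total activity is preserved at every time point, the property the abstract emphasizes:

```python
import numpy as np

k_pk, k_kb = 0.08, 0.05          # 1/min: plasma->kidney, kidney->bladder
dt, n = 0.1, 3000                # 0.1-min steps over 5 hours
A = np.zeros((n, 3))             # columns: plasma, kidney, bladder activity
A[0] = [1.0, 0.0, 0.0]           # unit activity injected into plasma

for i in range(1, n):
    p, k, b = A[i - 1]
    dp = -k_pk * p               # plasma loses tracer to the kidney
    dk = k_pk * p - k_kb * k     # kidney uptake minus excretion
    db = k_kb * k                # bladder accumulates the excreted tracer
    A[i] = [p + dt * dp, k + dt * dk, b + dt * db]   # forward Euler step

print(np.allclose(A.sum(axis=1), 1.0))   # total tracer conserved over time
```

Adjusting k_pk or k_kb mimics different relative renal uptakes or transit times, which is how such a model generates the "clinically realistic scenarios" for quality-control images.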
NASA Astrophysics Data System (ADS)
Lamperski, S.; Płuciennik, M.
2011-01-01
The recently developed inverse grand-canonical Monte Carlo technique (IGCMC) (S. Lamperski, Molecular Simulation 33, 1193 (2007)) and the MSA theory are applied to calculate the individual activity coefficients of ions and solvent for a solvent primitive model (SPM) electrolyte. In the SPM electrolyte model the anions, cations and solvent molecules are represented by hard spheres immersed in a dielectric continuum whose permittivity is equal to that of the solvent. The ions have a point electric charge embedded at the centre. A simple 1:1 aqueous electrolyte is considered. The ions are hydrated while the water molecules form clusters modelled by hard spheres of diameter ds. The diameter ds depends on the dissolved salt and is determined by fitting the mean activity coefficient ln γ± calculated from IGCMC and from the MSA to the experimental data. A linear correlation is observed between ds and the Marcus parameter ΔG HB, which describes the ion influence on the water association.
Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor
2015-03-01
Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level is only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of data is missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economical/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% less inputs than the initial model and a comparison with a multiple linear regression model trained and tested using the same input variables using multiple statistical performance indicators confirmed the advantage of the GRNN model. Sensitivity analysis has shown that inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
Kinetic Monte Carlo simulation of titin unfolding
NASA Astrophysics Data System (ADS)
Makarov, Dmitrii E.; Hansma, Paul K.; Metiu, Horia
2001-06-01
Recently, it has become possible to unfold a single protein molecule, titin, by pulling it with an atomic-force-microscope tip. In this paper, we propose and study a stochastic kinetic model of this unfolding process. Our model assumes that each immunoglobulin domain of titin is held together by six hydrogen bonds. The external force pulls on these bonds and lowers the energy barrier that prevents the hydrogen bond from breaking; this increases the rate of bond breaking and decreases the rate of bond healing. When all six bonds are broken, the domain unfolds. Since the experiment controls the pulling rate, not the force, the latter is calculated from a wormlike chain model for the protein. In the limit of high pulling rate, this kinetic model is solved by a novel simulation method. In the limit of low pulling rate, we develop a quasiequilibrium rate theory, which is tested by simulations. The results are in agreement with the experiments: the distribution of the unfolding force and the dependence of the mean unfolding force on the pulling rate are similar to those measured. The simulations also explain why the work of the force to break bonds is less than the bond energy and why the breaking-force distribution varies from sample to sample. We suggest that one can synthesize polymers that are well described by our model and that they may have unusual mechanical properties.
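The bond-breaking kinetics can be illustrated with a minimal kinetic Monte Carlo sketch: each intact bond breaks at a Bell-type, force-accelerated rate, and waiting times are drawn Gillespie-style. This constant-force toy omits bond healing and the wormlike-chain force computation used in the paper, and the rate parameters (k0, dx) are hypothetical.

```python
import math
import random

KT = 4.1  # thermal energy at room temperature, pN*nm

def unfolding_time(force_pN, rng, n_bonds=6, k0=1e-4, dx_nm=0.3):
    """One kinetic Monte Carlo sample of the domain unfolding time.

    Bell model: the force tilts the barrier, k(F) = k0 * exp(F*dx/kT).
    With m intact bonds the total breaking rate is m*k(F), so the
    waiting time to the next break is exponential with that rate.
    """
    k = k0 * math.exp(force_pN * dx_nm / KT)
    t = 0.0
    for m in range(n_bonds, 0, -1):
        t += rng.expovariate(m * k)   # Gillespie-style waiting time
    return t

rng = random.Random(0)
mean_low = sum(unfolding_time(50.0, rng) for _ in range(2000)) / 2000
mean_high = sum(unfolding_time(150.0, rng) for _ in range(2000)) / 2000
```

Averaging many samples reproduces the qualitative experimental trend: the mean unfolding time drops sharply as the applied force grows.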
NASA Astrophysics Data System (ADS)
Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.
2016-12-01
Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli-blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
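The reported inverse-power relation between cluster size and runtime can be recovered with a simple log-log least-squares fit. The synthetic timings below only mimic the reported 53-minute single-node runtime under an assumed near-ideal scaling exponent; they are not the study's measurements.

```python
import numpy as np

def fit_inverse_power(nodes, runtimes):
    """Fit runtime = a * nodes**(-b) by linear least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(nodes), np.log(runtimes), 1)
    return np.exp(intercept), -slope

# Synthetic timings (minutes) following an assumed scaling exponent of 0.95.
nodes = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 20.0])
runtimes = 53.0 * nodes ** -0.95
a, b = fit_inverse_power(nodes, runtimes)
```

With real benchmark data, the fitted exponent b quantifies how far the parallelization falls short of ideal speedup (b = 1).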
Monte Carlo simulation of laser beam scattering by water droplets
NASA Astrophysics Data System (ADS)
Wang, Biao; Tong, Guang-de; Lin, Jia-xuan
2013-09-01
A Monte Carlo simulation of laser beam scattering by discrete water droplets is presented, and the temporal profile of the LIDAR signal scattered from randomly distributed water droplets such as raindrops and fog is acquired. A photon source model is developed in the simulation for laser beams of arbitrary intensity distribution. Mie theory and the geometrical optics approximation are used to calculate optical parameters, such as the scattering coefficient, albedo and average asymmetry factor, for water droplets of variable size with a gamma distribution. The scattering angle is sampled from the probability distribution given by the Henyey-Greenstein phase function. The model, which solves the semi-infinite homogeneous-medium problem, is capable of handling a variety of geometries and arbitrary spatio-temporal pulse profiles.
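Sampling the scattering angle from the Henyey-Greenstein phase function has a standard closed-form inverse CDF, which a photon-transport loop of the kind described can call once per scattering event. The anisotropy value g below is only illustrative.

```python
import math
import random

def sample_hg_cos_theta(g, rng):
    """Draw cos(theta) from the Henyey-Greenstein phase function by
    inverting its cumulative distribution."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0          # isotropic limit
    u = rng.random()
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = random.Random(42)
g = 0.85   # illustrative forward-peaked asymmetry factor
samples = [sample_hg_cos_theta(g, rng) for _ in range(200_000)]
mean_cos = sum(samples) / len(samples)
```

A convenient self-check: the mean cosine of the sampled angles converges to g, the average asymmetry factor computed from Mie theory.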
Papadimitroulas, P; Kostou, T; Kagadis, G; Loudos, G
2015-06-15
Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning, respectively. The resolution of the phantom was set to 3.5 mm³. The impact of motion on spatial resolution was quantified using a sphere with 3.5 mm diameter and 10 separate time frames, in the modeled ECAM SPECT scanner. Finally, the impact of respiratory motion on resolution and on the imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: The FWHM was extracted for a static and a moving sphere located ∼23 cm from the entrance of the SPECT head. The difference in FWHM between the two simulations was 20%. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was included in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a displacement of ∼2.5 cm. Conclusion: As we move to individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like motion correction.
On the time scale associated with Monte Carlo simulations
Bal, Kristof M.; Neyts, Erik C.
2014-11-28
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations, allowing, for example, the simulation of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.
James Webb Space Telescope (JWST) Stationkeeping Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra; Yu, Wayne
2014-01-01
The James Webb Space Telescope (JWST) will launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 11 years. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
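The budgeting idea, drawing uncertain per-maneuver delta-V values, summing them over the mission, and setting the budget at a high percentile of the total, can be sketched as below. The maneuver count, lognormal form, and dispersion are assumptions for illustration, not the JWST analysis inputs.

```python
import numpy as np

def sk_delta_v_budget(n_trials=20_000, n_maneuvers=190, seed=1):
    """Monte Carlo delta-V budget: sum uncertain per-maneuver magnitudes
    over a mission and take the 99th percentile as the budget (m/s)."""
    rng = np.random.default_rng(seed)
    # Hypothetical ~0.02 m/s median per maneuver with a 50% lognormal
    # spread standing in for SRP/observation-schedule uncertainty.
    dv = rng.lognormal(np.log(0.02), 0.5, size=(n_trials, n_maneuvers))
    totals = dv.sum(axis=1)
    return totals.mean(), np.percentile(totals, 99.0)

mean_total, budget = sk_delta_v_budget()
```

The gap between the mean total and the percentile budget is the margin the Monte Carlo draws buy over a deterministic estimate.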
The t-J model of hard-core bosons in slave-particle representation and its Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Nakano, Yuki; Ishima, Takumi; Kobayashi, Naohiro; Sakakibara, Kazuhiko; Ichinose, Ikuo; Matsui, Tetsuo
2012-12-01
We study the system of hard-core bosons (HCB) with two species in the three-dimensional lattice at finite temperatures. In the strong-correlation limit, the system becomes the bosonic t-J model, that is, the t-J model of “bosonic electrons”. The bosonic “electron” operator Bxσ at the site x with a two-component spin σ (= 1, 2) is treated as a HCB operator, and represented by a composite of two slave particles; a spinon described by a Schwinger boson (CP1 boson) zxσ and a holon described by a HCB field φx as Bxσ = φ†x zxσ. This φx is again represented by another CP1 quasi-spinon operator ωxa (a = 1, 2). The phase diagrams of the resulting double CP1 system obtained by Monte Carlo simulations involve first-order and second-order phase boundaries. We present in detail the techniques and algorithm to reduce the hysteresis and locate the first-order transition points.
Choi, M.; Chan, V. S.; Lao, L. L.; Pinsker, R. I.; Green, D.; Berry, L. A.; Jaeger, F.; Park, J. M.; Heidbrink, W. W.; Liu, D.; Podesta, M.; Harvey, R.; Smithe, D. N.; Bonoli, P.
2010-05-15
The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy
Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.
2015-01-01
Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
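The simulation logic, drawing pharmacokinetic parameters for each virtual subject, computing the time above MIC over a steady-state dosing interval, and counting the fraction of subjects meeting the target, can be sketched with a one-compartment infusion model. All population values, the MIC, and the fT>MIC target below are illustrative stand-ins, not the study's institution-specific data.

```python
import numpy as np

def pta(dose_mg, tau_h, t_inf_h, mic_mg_L, target=0.40,
        n_subjects=2000, seed=7):
    """Probability of target attainment: fraction of simulated subjects
    whose fT>MIC over a steady-state interval meets the target
    (one-compartment IV infusion, forward-Euler time stepping)."""
    rng = np.random.default_rng(seed)
    CL = rng.lognormal(np.log(10.0), 0.3, n_subjects)  # clearance, L/h
    V = rng.lognormal(np.log(20.0), 0.2, n_subjects)   # volume, L
    k = CL / V
    rate_in = (dose_mg / t_inf_h) / V   # mg/L per h while infusing
    dt = 0.02
    steps = int(6 * tau_h / dt)         # 6 intervals: near steady state
    last = steps - int(tau_h / dt)      # score only the final interval
    C = np.zeros(n_subjects)
    above = np.zeros(n_subjects)
    for i in range(steps):
        infusing = (i * dt) % tau_h < t_inf_h
        C = C + dt * ((rate_in if infusing else 0.0) - k * C)
        if i >= last:
            above += C > mic_mg_L
    f_above = above / (steps - last)
    return float(np.mean(f_above >= target))

pta_prolonged = pta(1000, 8, 3.0, mic_mg_L=16.0)
pta_intermittent = pta(1000, 8, 0.5, mic_mg_L=16.0)
```

Stretching the same dose over a longer infusion raises the time above MIC for most subjects, which is the mechanism behind the prolonged-infusion advantage reported above.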
ERIC Educational Resources Information Center
Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O.
2007-01-01
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…
NASA Astrophysics Data System (ADS)
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulation were performed on three Schiff bases, namely 4-(4-bromophenyl)-N′-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N′-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC) and 4-(4-bromophenyl)-N′-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity, such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA) and nucleophilicity (ω), were computed and discussed. Monte Carlo simulations were applied to search for the most stable configuration and adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
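The electronic parameters listed follow directly from the frontier-orbital energies via the usual conceptual-DFT finite-difference formulas. The sketch below uses placeholder orbital energies, not the computed values for BMTC/BDTC/BHTC, and labels ω by its conventional name, the global electrophilicity index.

```python
def reactivity_descriptors(e_homo_eV, e_lumo_eV):
    """Koopmans-type conceptual-DFT descriptors from HOMO/LUMO energies.

    With I = -E_HOMO and A = -E_LUMO:
      gap   = E_LUMO - E_HOMO
      eta   = (I - A) / 2        hardness
      sigma = 1 / eta            softness
      chi   = (I + A) / 2        absolute electronegativity
      omega = mu**2 / (2 * eta)  electrophilicity index, mu = -chi
    """
    I, A = -e_homo_eV, -e_lumo_eV
    eta = (I - A) / 2.0
    chi = (I + A) / 2.0
    return {"gap": e_lumo_eV - e_homo_eV,
            "eta": eta,
            "sigma": 1.0 / eta,
            "chi": chi,
            "omega": chi ** 2 / (2.0 * eta)}

d = reactivity_descriptors(-6.0, -2.0)   # placeholder energies in eV
```

A smaller gap (softer molecule) generally signals a stronger tendency to donate charge to the metal surface, which is why these descriptors correlate with inhibition efficiency.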
A study on tetrahedron-based inhomogeneous Monte Carlo optical simulation.
Shen, Haiou; Wang, Ge
2010-12-03
Monte Carlo (MC) simulation is widely recognized as a gold standard in biophotonics for its high accuracy. Here we analyze several issues associated with tetrahedron-based optical Monte Carlo simulation in the context of TIM-OS, MMCM, MCML, and CUDAMCML in terms of accuracy and efficiency. Our results show that TIM-OS has significantly better performance in the complex-geometry cases and comparable performance to CUDAMCML in the multi-layered tissue model.
Determining MTF of digital detector system with Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee
2005-04-01
We have designed a detector based on a-Se (amorphous selenium) and simulated it with the Monte Carlo method. We will apply cascaded linear-systems theory to determine the MTF for the whole detector system. For direct comparison with experiment, we simulated a 139-µm pixel pitch and used a simulated X-ray tube spectrum.
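In a cascaded-linear-systems workflow, a detector's presampled MTF can be obtained as the normalized Fourier magnitude of its line spread function. The Gaussian LSF below is a hypothetical stand-in for the simulated a-Se response, sampled at the 139-µm pixel pitch mentioned above.

```python
import numpy as np

def mtf_from_lsf(lsf, pitch_mm):
    """MTF as the normalized magnitude of the LSF's discrete Fourier
    transform; returns spatial frequencies (cycles/mm) and MTF values."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                 # forces MTF(0) = 1
    return np.fft.rfftfreq(lsf.size, d=pitch_mm), np.abs(np.fft.rfft(lsf))

pitch = 0.139                             # mm (139-um pixel pitch)
x = (np.arange(64) - 32) * pitch
lsf = np.exp(-0.5 * (x / 0.2) ** 2)       # hypothetical Gaussian blur, sigma 0.2 mm
freqs, mtf = mtf_from_lsf(lsf, pitch)
```

Taking the magnitude makes the result insensitive to where the line is centered on the pixel grid, so only the blur width matters.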
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2016-01-01
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static and can only perform static simulations for problems with fixed physics and geometry settings. Proton therapy is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but only the average dose over a whole organ or tissue can be obtained, rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 simulation and to analyze the data and plot results after the simulation. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body after treating a patient with prostate cancer using proton therapy.
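The CT-to-voxel conversion step amounts to binning Hounsfield units into material IDs that the Monte Carlo geometry can consume. The abstract describes a MATLAB implementation; the Python sketch below shows the same thresholding idea with hypothetical HU ranges (segmentation schemes vary by protocol and scanner calibration).

```python
import numpy as np

# Hypothetical HU intervals -> material IDs (half-open [lo, hi)).
HU_MATERIALS = [(-1000, -400, 1),   # 1: lung / air-like
                (-400,  -100, 2),   # 2: adipose
                (-100,   100, 3),   # 3: soft tissue
                ( 100,  3000, 4)]   # 4: bone

def hu_to_materials(ct_hu):
    """Map a CT volume in Hounsfield units to integer material IDs,
    yielding a voxel phantom usable as a simulation target geometry."""
    ct = np.asarray(ct_hu)
    ids = np.zeros(ct.shape, dtype=np.int16)
    for lo, hi, mat in HU_MATERIALS:
        ids[(ct >= lo) & (ct < hi)] = mat
    return ids

slice_ids = hu_to_materials(np.array([[-900, -200], [30, 500]]))
```

Each material ID is then associated with a density and elemental composition when the voxel phantom is loaded into the transport code.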
Monte Carlo simulations of nanoscale focused neon ion beam sputtering.
Timilsina, Rajendra; Rack, Philip D
2013-12-13
A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed.
Residual entropy of ices and clathrates from Monte Carlo simulation
Kolafa, Jiří
2014-05-28
We calculated the residual entropy of ices (Ih, Ic, III, V, VI) and clathrates (I, II, H), assuming the same energy of all configurations satisfying the Bernal–Fowler ice rules. The Metropolis Monte Carlo simulations in the range of temperatures from infinity to a size-dependent threshold were followed by the thermodynamic integration. Convergence of the simulation and the finite-size effects were analyzed using the quasichemical approximation and the Debye–Hückel theory applied to the Bjerrum defects. The leading finite-size error terms, ln N/N, 1/N, and for the two-dimensional square ice model also 1/N^(3/2), were used for an extrapolation to the thermodynamic limit. Finally, we discuss the influence of unequal energies of proton configurations.
Monte Carlo simulations of systems with complex energy landscapes
NASA Astrophysics Data System (ADS)
Wüst, T.; Landau, D. P.; Gervais, C.; Xu, Y.
2009-04-01
Non-traditional Monte Carlo simulations are a powerful approach to the study of systems with complex energy landscapes. After reviewing several of these specialized algorithms we shall describe the behavior of typical systems including spin glasses, lattice proteins, and models for "real" proteins. In the Edwards-Anderson spin glass it is now possible to produce probability distributions in the canonical ensemble and thermodynamic results of high numerical quality. In the hydrophobic-polar (HP) lattice protein model Wang-Landau sampling with an improved move set (pull-moves) produces results of very high quality. These can be compared with the results of other methods of statistical physics. A more realistic membrane protein model for Glycophorin A is also examined. Wang-Landau sampling allows the study of the dimerization process including an elucidation of the nature of the process.
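Wang-Landau sampling, used above for the HP model, estimates the density of states g(E) directly by penalizing already-visited energies until the energy histogram is flat. A minimal sketch for a 4x4 ferromagnetic Ising lattice (single spin flips rather than the pull-moves of the lattice-protein work) might look like:

```python
import math, random

random.seed(1)
L = 4                       # 4x4 Ising lattice, periodic boundaries
s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def energy():
    return -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                for i in range(L) for j in range(L))

E = energy()
lng, hist = {}, {}          # running estimate of ln g(E), visit histogram
lnf = 1.0                   # modification factor, halved on flat histograms
while lnf > 1e-3:
    for _ in range(1000 * L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2 * s[i][j] * (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                            + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        Enew = E + dE
        # accept with probability min(1, g(E) / g(Enew))
        if (lng.get(Enew, 0.0) <= lng.get(E, 0.0)
                or random.random() < math.exp(lng.get(E, 0.0) - lng.get(Enew, 0.0))):
            s[i][j] = -s[i][j]
            E = Enew
        lng[E] = lng.get(E, 0.0) + lnf      # penalize the current energy
        hist[E] = hist.get(E, 0) + 1
    counts = list(hist.values())
    if min(counts) > 0.8 * sum(counts) / len(counts):
        lnf /= 2.0          # histogram flat enough: refine f, reset histogram
        hist = {}
```

For this lattice the minimum energy (E = -32, all spins aligned) and the maximum energy (E = +32, checkerboard) both have degeneracy 2, so a converged run should give nearly equal ln g at the two extremes.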
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Multicanonical Monte Carlo for Simulation of Optical Links
NASA Astrophysics Data System (ADS)
Bononi, Alberto; Rusch, Leslie A.
Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for the estimation of the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method to efficiently simulate events occurring with probabilities smaller than ~10^{-6}, such as bit error rate (BER) and system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10^{-3}; these systems are well served by traditional Monte Carlo error counting. MMC and IS are, nonetheless, fundamental tools to both understand the statistics of the decision variable (as well as of any physical parameter of interest) and to validate any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10^{-6}, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution to estimate outages.
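To see why plain error counting fails below ~10^{-6} while biased sampling succeeds, consider estimating a Gaussian tail probability by drawing from a shifted density and reweighting. This toy example illustrates the importance-sampling idea the chapter builds on, not MMC itself:

```python
import math, random

random.seed(42)

def tail_prob_is(gamma, shift, n=200_000):
    """Estimate P(X > gamma), X ~ N(0,1), by sampling from N(shift, 1)
    and reweighting with the likelihood ratio exp(-shift*y + shift^2/2)."""
    total = 0.0
    for _ in range(n):
        y = random.gauss(shift, 1.0)
        if y > gamma:
            total += math.exp(-shift * y + 0.5 * shift * shift)
    return total / n

gamma = 5.0
exact = 0.5 * math.erfc(gamma / math.sqrt(2.0))   # ~2.9e-7: hopeless for error counting
est = tail_prob_is(gamma, shift=gamma)
```

With the sampling density centered on the rare region, a few hundred thousand samples give percent-level accuracy, where naive counting would need on the order of 10^9 samples for the same event.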
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
NASA Astrophysics Data System (ADS)
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits a vessel, ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; in this case the layers below do not contribute to the signal. Therefore, the prospect of building a device to detect ZPP fluorescence in the retina looks very promising.
Leblanc, M D; Whitehead, J P; Plumer, M L
2013-05-15
A combination of Metropolis and modified Wolff cluster algorithms is used to examine the impact of uniaxial single-ion anisotropy on the phase transition to ferromagnetic order of Heisenberg macrospins on a 2D square lattice. This forms the basis of a model for granular perpendicular recording media where macrospins represent the magnetic moment of grains. The focus of this work is on the interplay between anisotropy D, intragrain exchange J' and intergrain exchange J on the ordering temperature T_C and extends our previous reported analysis of the granular Ising model. The role of intragrain degrees of freedom in heat assisted magnetic recording is discussed.
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Chan, C H; Rikvold, P A
2015-01-01
The Ziff-Gulari-Barshad (ZGB) model, a simplified description of the oxidation of carbon monoxide (CO) on a catalyst surface, is widely used to study properties of nonequilibrium phase transitions. In particular, it exhibits a nonequilibrium, discontinuous transition between a reactive and a CO poisoned phase. If one allows a nonzero rate of CO desorption (k), the line of phase transitions terminates at a critical point (k_c). In this work, instead of restricting the CO and atomic oxygen (O) to react to form carbon dioxide (CO2) only when they are adsorbed in close proximity, we consider a modified model that includes an adjustable probability for adsorbed CO and O atoms located far apart on the lattice to react. We employ large-scale Monte Carlo simulations for system sizes up to 240×240 lattice sites, using the crossing of fourth-order cumulants to study the critical properties of this system. We find that the nonequilibrium critical point changes from the two-dimensional Ising universality class to the mean-field universality class upon introducing even a weak long-range reactivity mechanism. This conclusion is supported by measurements of cumulant fixed-point values, cluster percolation probabilities, correlation-length finite-size scaling properties, and the critical exponent ratio β/ν. The observed behavior is consistent with that of the equilibrium Ising ferromagnet with additional weak long-range interactions [T. Nakada, P. A. Rikvold, T. Mori, M. Nishino, and S. Miyashita, Phys. Rev. B 84, 054433 (2011)]. The large system sizes and the use of fourth-order cumulants also enable determination with improved accuracy of the critical point of the original ZGB model with CO desorption.
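The fourth-order (Binder) cumulant used above, U4 = 1 - ⟨m^4⟩/(3⟨m^2⟩^2), distinguishes phases because it tends to 0 for Gaussian (disordered) fluctuations and to 2/3 for a symmetric double-peaked (ordered) distribution; crossings of U4 curves for different system sizes locate the critical point. A toy illustration with synthetic order-parameter samples:

```python
import random

def binder_cumulant(samples):
    """U4 = 1 - <m^4> / (3 <m^2>^2) for a list of order-parameter samples."""
    n = len(samples)
    m2 = sum(m * m for m in samples) / n
    m4 = sum(m ** 4 for m in samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

random.seed(0)
# Disordered phase: m fluctuates as a Gaussian around zero  -> U4 near 0
gauss = [random.gauss(0.0, 0.3) for _ in range(100_000)]
# Ordered phase: m sits near +/-m0 with small fluctuations  -> U4 near 2/3
ordered = [random.choice((-1.0, 1.0)) * random.gauss(0.8, 0.02)
           for _ in range(100_000)]
```

In an actual finite-size-scaling study the samples would come from simulations at a sequence of system sizes and control-parameter values, and the crossing of the U4(k) curves would estimate k_c.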
Lodise, Thomas P; Kinzig-Schippers, Martina; Drusano, George L; Loos, Ulrich; Vogel, Friedrich; Bulitta, Jürgen; Hinder, Markus; Sörgel, Fritz
2008-06-01
Cefditoren is a broad-spectrum, oral cephalosporin that is highly active against clinically relevant respiratory tract pathogens, including multidrug-resistant Streptococcus pneumoniae. This study described its pharmacodynamic profile in plasma and epithelial lining fluid (ELF). Plasma and ELF pharmacokinetic data were obtained from 24 patients under fasting conditions. Cefditoren and urea concentrations were determined in plasma and bronchoalveolar lavage fluid by liquid chromatography-tandem mass spectrometry. Concentration-time profiles in plasma and ELF were modeled using a model with three disposition compartments and first-order absorption, elimination, and transfer. Pharmacokinetic parameters were identified in a population pharmacokinetic analysis (big nonparametric adaptive grid with adaptive gamma). Monte Carlo simulation (9,999 subjects) was performed with the ADAPT II program to estimate the probability of target attainment at which the free cefditoren plasma concentrations (88% protein binding) and total ELF concentrations exceeded the MIC for 33% of the dosing interval for 400 mg cefditoren given orally every 12 h. After the Bayesian step, the overall fits of the model to the data were good, and plots of predicted versus observed concentrations for plasma and ELF showed slopes and intercepts very close to the ideal values of 1.0 and 0.0, respectively. In the plasma probability of target attainment analysis, the probability of achieving a time for which free, or unbound, plasma concentration exceeds the MIC of the organism for 33% of the dosing interval was <80% for a MIC of >0.06 mg/liter. Similar to plasma, the probability of achieving a time above the MIC of 33% was <80% for MIC of >0.06 mg/liter in ELF. Cefditoren was found to have a low probability of achieving a bacteriostatic effect against MICs of >0.06 mg/liter, which includes most S. pneumoniae isolates with intermediate susceptibility to penicillin, when given in the fasting state in both
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
A Monte Carlo investigation of the Hamiltonian mean field model
NASA Astrophysics Data System (ADS)
Pluchino, Alessandro; Andronico, Giuseppe; Rapisarda, Andrea
2005-04-01
We present a Monte Carlo numerical investigation of the Hamiltonian mean field (HMF) model. We begin by discussing canonical Metropolis Monte Carlo calculations, in order to check the caloric curve of the HMF model and study finite-size effects. In the second part of the paper, we present numerical simulations obtained by means of a modified Monte Carlo procedure with the aim of testing the stability of those states at minimum temperature and zero magnetization (homogeneous quasi-stationary states) which exist in the condensed phase of the model just below the critical point. For energy densities smaller than the limiting value U∼0.68, we find that these states are unstable, confirming a recent result on the Vlasov stability analysis applied to the HMF model.
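Because the HMF interaction is mean-field, the potential energy depends only on the magnetization, V = (N/2)(1 - m^2), so canonical Metropolis updates of single angles are cheap. A minimal sketch (the parameters N = 200 and T = 0.3 < T_c = 1/2 are illustrative, not those of the paper):

```python
import math, random

random.seed(7)
N, T = 200, 0.3                 # T below the HMF critical temperature 1/2
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
Sx = sum(math.cos(t) for t in theta)
Sy = sum(math.sin(t) for t in theta)

def potential(sx, sy):
    # V = (N/2) * (1 - m^2) with magnetization m^2 = (sx^2 + sy^2) / N^2
    return 0.5 * N * (1.0 - (sx * sx + sy * sy) / (N * N))

mags, sweeps = [], 3000
for sweep in range(sweeps):
    for _ in range(N):
        i = random.randrange(N)
        new = theta[i] + random.gauss(0.0, 1.0)
        dSx = math.cos(new) - math.cos(theta[i])
        dSy = math.sin(new) - math.sin(theta[i])
        dV = potential(Sx + dSx, Sy + dSy) - potential(Sx, Sy)
        if dV <= 0.0 or random.random() < math.exp(-dV / T):
            theta[i], Sx, Sy = new, Sx + dSx, Sy + dSy
    if sweep >= sweeps // 2:            # discard the first half as burn-in
        mags.append(math.hypot(Sx, Sy) / N)

m = sum(mags) / len(mags)               # spontaneous magnetization, nonzero below T_c
```

The caloric curve then follows from U/N = T/2 + (1 - m^2)/2, the check performed in the first part of the paper.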
Monte Carlo Simulation of Response Time for Velocity Modulation Transistors
NASA Astrophysics Data System (ADS)
Maezawa, Koichi; Mizutani, Takashi; Tomizawa, Masaaki
1992-03-01
We have studied the response time for velocity modulation transistors (VMTs) using particle Monte Carlo simulation. The intrinsic VMT model with zero gate-source spacing was used to avoid the change in total number of electrons due to the difference in source resistances between the two channels. The results show that the response time for VMTs is about half that for ordinary high electron mobility transistors (HEMTs). The remaining factor limiting the response time is the electron redistribution in the channel, which is shown to be caused by the difference in velocity-electric field characteristics in the two channels. A “virtual” VMT model with a single channel, where the impurity concentration is changed abruptly at a certain moment, has also been studied to clarify the effect of electron redistribution.
Residual entropy of ice III from Monte Carlo simulation.
Kolafa, Jiří
2016-03-28
We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β assuming equal energies of all configurations. To do this, a discrete ice model with Bjerrum defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes followed by thermodynamic integration and extrapolation to N = ∞. Similarly as for other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation. However, the corrections caused by fluctuation of energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.
Monte Carlo simulation and dosimetric verification of radiotherapy beam modifiers
NASA Astrophysics Data System (ADS)
Spezi, E.; Lewis, D. G.; Smith, C. W.
2001-11-01
Monte Carlo simulation of beam modifiers such as physical wedges and compensating filters has been performed with a rectilinear voxel geometry module. A modified version of the EGS4/DOSXYZ code has been developed for this purpose. The new implementations have been validated against the BEAM Monte Carlo code using its standard component modules (CMs) in several geometrical conditions. No significant disagreements were found within the statistical errors of 0.5% for photons and 2% for electrons. The clinical applicability and flexibility of the new version of the code has been assessed through an extensive verification versus dosimetric data. Both Varian multi-leaf collimator (MLC) wedges and standard wedges have been simulated and compared against experiments for 6 MV photon beams and different field sizes. Good agreement was found between calculated and measured depth doses and lateral dose profiles along both wedged and unwedged directions for different depths and focus-to-surface distances. Furthermore, Monte Carlo-generated output factors for both open and wedged fields agreed with linac commissioning beam data within statistical uncertainties of the calculations (<3% at largest depths). Compensating filters of both low-density and high-density materials have also been successfully simulated. As a demonstration, a wax compensating filter with a complex three-dimensional concave and convex geometry has been modelled through a CT scan import. Calculated depth doses and lateral dose profiles for different field sizes agreed well with experiments. The code was used to investigate the performance of a commercial treatment planning system in designing compensators. Dose distributions in a heterogeneous water phantom emulating the head and neck region were calculated with the convolution-superposition method (pencil beam and collapsed cone implementations) and compared against those from the MC code developed herein. The new technique presented in this work is
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool to investigate the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, pre-chemical module, chemical module, geometric module and DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event-by-event. More than one set of inelastic cross sections were calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damages induced by the energy depositions of the electrons and the indirect damages induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with the experimental results. In this paper, the influence of the inelastic cross sections and of the vibrational excitation reaction on these parameters and on the DNA strand-break yields was studied. Further work on NASIC is underway.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-05-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient.
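The tau-leaping idea used in the hybrid method, firing a Poisson-distributed batch of reactions per time step instead of one event at a time, can be sketched on the simplest possible system, a first-order decay A -> 0. This is a generic illustration, not the copper-electrodeposition chemistry of the paper:

```python
import math, random

random.seed(3)

def sample_poisson(lam):
    """Knuth's inversion method; adequate for the modest means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(n0, k_rate, t_end, tau):
    """Tau-leaping for A -> 0 with propensity k_rate * n: each leap fires
    a Poisson(k_rate * n * tau) batch of reactions instead of one event."""
    n, t = n0, 0.0
    while t < t_end - 1e-12 and n > 0:
        n = max(0, n - sample_poisson(k_rate * n * tau))
        t += tau
    return n

runs = [tau_leap_decay(1000, 1.0, 1.0, 0.01) for _ in range(200)]
mean_n = sum(runs) / len(runs)        # mean-field answer: 1000 * exp(-1) ~ 368
```

The adaptive scheme in the paper additionally falls back to exact SSA steps whenever a propensity is too small for the Poisson approximation to be safe.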
Monte Carlo simulation in statistical physics: an introduction
NASA Astrophysics Data System (ADS)
Binder, K.; Heermann, D. W.
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.
Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.
2014-03-01
Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.
NOTE: Monte Carlo simulation of RapidArc radiotherapy delivery
NASA Astrophysics Data System (ADS)
Bush, K.; Townson, R.; Zavgorodni, S.
2008-10-01
RapidArc radiotherapy technology from Varian Medical Systems is one of the most complex delivery systems currently available, and achieves an entire intensity-modulated radiation therapy (IMRT) treatment in a single gantry rotation about the patient. Three dynamic parameters can be continuously varied to create IMRT dose distributions—the speed of rotation, beam shaping aperture and delivery dose rate. Modeling of RapidArc technology was incorporated within the existing Vancouver Island Monte Carlo (VIMC) system (Zavgorodni et al 2007 Radiother. Oncol. 84 S49, 2008 Proc. 16th Int. Conf. on Medical Physics). This process was named VIMC-Arc and has become an efficient framework for the verification of RapidArc treatment plans. VIMC-Arc is a fully automated system that constructs the Monte Carlo (MC) beam and patient models from a standard RapidArc DICOM dataset, simulates radiation transport, collects the resulting dose and converts the dose into DICOM format for import back into the treatment planning system (TPS). VIMC-Arc accommodates multiple arc IMRT deliveries and models gantry rotation as a series of segments with dynamic MLC motion within each segment. Several verification RapidArc plans were generated by the Eclipse TPS on a water-equivalent cylindrical phantom and re-calculated using VIMC-Arc. This includes one 'typical' RapidArc plan, one plan for dual arc treatment and one plan with 'avoidance' sectors. One RapidArc plan was also calculated on a DICOM patient CT dataset. Statistical uncertainty of MC simulations was kept within 1%. VIMC-Arc produced dose distributions that matched very closely to those calculated by the anisotropic analytical algorithm (AAA) that is used in Eclipse. All plans also demonstrated better than 1% agreement of the dose at the isocenter. This demonstrates the capabilities of our new MC system to model all dosimetric features required for RapidArc dose calculations.
Monte Carlo simulation of the spear reflectometer at LANSCE
Smith, G.S.
1995-12-31
The Monte Carlo instrument simulation code, MCLIB, contains elements to represent several components found in neutron spectrometers including slits, choppers, detectors, sources and various samples. Using these elements to represent the components of a neutron scattering instrument, one can simulate, for example, an inelastic spectrometer, a small angle scattering machine, or a reflectometer. In order to benchmark the code, we chose to compare simulated data from the MCLIB code with an actual experiment performed on the SPEAR reflectometer at LANSCE. This was done by first fitting an actual SPEAR data set to obtain the model scattering-length-density profile, β(z), for the sample and the substrate. Then these parameters were used as input values for the sample scattering function. A simplified model of SPEAR was chosen which contained all of the essential components of the instrument. A code containing the MCLIB subroutines was then written to simulate this simplified instrument. The resulting data were then fit and compared to the actual data set in terms of statistics, resolution and accuracy.
NASA Astrophysics Data System (ADS)
Sharma, Anupam; Long, Lyle N.
2004-10-01
A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
Monte Carlo simulation of radiation streaming from a radioactive material shipping cask
Liu, Y.Y.; Schwarz, R.A.; Tang, J.S.
1996-04-01
Simulated detection of gamma radiation streaming from a radioactive material shipping cask has been performed with the Monte Carlo codes MCNP4A and MORSE-SGC/S. Despite inherent difficulties in simulating deep penetration of radiation and streaming, the simulations have yielded results that agree within one order of magnitude with the radiation survey data, with reasonable statistics. These simulations have also provided insight into modeling radiation detection, notably on the location and orientation of the radiation detector with respect to photon streaming paths, and on techniques used to reduce variance in the Monte Carlo calculations.
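One standard variance-reduction technique for deep penetration, geometric splitting, can be illustrated on a pure-absorber slab divided into layers: every photon that survives a layer is split into two half-weight copies, which keeps the estimator unbiased while maintaining a population deep inside the shield. This is a generic sketch, not the MCNP4A/MORSE-SGC setup of the study:

```python
import math, random

random.seed(9)
MU, D, NLAYERS = 1.0, 10.0, 10               # optical depth mu*D = 10
P_LAYER = math.exp(-MU * D / NLAYERS)        # survival probability per layer
t_exact = math.exp(-MU * D)                  # analytic transmission, ~4.5e-5

def transmission_with_splitting(n_hist):
    """Estimate slab transmission with geometric splitting: a photon that
    survives a layer is split into two copies carrying half the weight."""
    total = 0.0
    for _ in range(n_hist):
        stack = [(1.0, NLAYERS)]             # (statistical weight, layers left)
        while stack:
            w, layers = stack.pop()
            while layers > 0:
                if random.random() >= P_LAYER:
                    break                    # absorbed in this layer
                layers -= 1
                w *= 0.5                     # split into two half-weight copies
                stack.append((w, layers))
            else:
                total += w                   # photon crossed all layers
    return total / n_hist

est = transmission_with_splitting(100_000)
```

An analog simulation would transmit only ~5 photons per 100,000 histories here; splitting keeps thousands of (weighted) scoring events for the same cost.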
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
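The physics behind the simulation, refraction, one internal reflection, and a second refraction, fixes the rainbow's angular position at the minimum of the deviation function D(i) = 180° + 2i - 4r, with sin i = n sin r. A short sketch in Python (rather than the article's BASIC):

```python
import math

N_WATER = 1.333   # refractive index of water for red light (assumed value)

def deviation(i):
    """Total deviation after refraction, one internal reflection and a
    second refraction (primary rainbow), for impact angle i in radians."""
    r = math.asin(math.sin(i) / N_WATER)
    return math.pi + 2.0 * i - 4.0 * r

# Scan impact angles; the minimum of D(i) is the caustic that makes the bow
d_min = min(deviation(math.radians(a / 10.0)) for a in range(1, 900))
rainbow_angle = 180.0 - math.degrees(d_min)   # angle from the antisolar point
```

Repeating the scan with the slightly different refractive indices of the other colors reproduces the dispersion effect the appended program illustrates.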
Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.
2015-11-21
Finite temperature quantum and anharmonic effects are studied in H2–Li+–benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU_ads, and enthalpy, ΔH_ads, for H2 adsorption onto Li+–benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H2–Li+–benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU_ads and ΔH_ads are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol^{-1}, respectively.
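The ring-polymer sampling behind the rigid-body simulations above can be illustrated with a primitive-action PIMC sketch for the simplest possible system, a 1D harmonic oscillator (hbar = m = omega = 1); the interpolated H2–Li+–benzene surface is replaced here by V(x) = x^2/2:

```python
import math, random

random.seed(5)
BETA, P = 3.0, 16        # inverse temperature, number of beads
TAU = BETA / P           # imaginary-time step
x = [0.0] * P            # ring-polymer bead positions

def local_action(k, xk):
    """Part of the primitive action that depends on bead k."""
    left, right = x[(k - 1) % P], x[(k + 1) % P]
    spring = ((xk - left) ** 2 + (xk - right) ** 2) / (2.0 * TAU)
    return spring + TAU * 0.5 * xk * xk        # potential V(x) = x^2 / 2

samples = []
for sweep in range(50_000):
    for k in range(P):
        trial = x[k] + random.gauss(0.0, 0.35)
        dS = local_action(k, trial) - local_action(k, x[k])
        if dS <= 0.0 or random.random() < math.exp(-dS):
            x[k] = trial
    if sweep >= 10_000:                        # discard burn-in
        samples.append(sum(xi * xi for xi in x) / P)

x2 = sum(samples) / len(samples)
exact = 0.5 / math.tanh(0.5 * BETA)            # quantum <x^2> at beta = 3
```

At beta = 3 the quantum result ⟨x^2⟩ ≈ 0.55 differs clearly from the classical value 1/beta ≈ 0.33, the kind of finite-temperature quantum broadening the paper quantifies for the adsorbed H2.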
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of classical Monte Carlo simulations of random frustrated models in the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing GPU performance is proper handling of the data structures. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times relative to the single-threaded CPU implementation.
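The multi-spin coding idea can be illustrated independently of CUDA (this is a minimal sketch, not the authors' implementation): each integer word packs one site of many replica chains, so a bitwise XOR compares all replicas at once. The chain length and 64-replica layout below are arbitrary choices for illustration.

```python
import random

L = 16          # sites in a periodic 1-D Ising chain
REPLICAS = 64   # one bit per replica: 64 independent chains share each word

def broken_bonds(spins):
    """Per-replica count of antiparallel neighbor pairs, via XOR + bit tests.
    spins[i] packs site i of all replicas into one integer (bit k = replica k)."""
    counts = [0] * REPLICAS
    for i in range(len(spins)):
        diff = spins[i] ^ spins[(i + 1) % len(spins)]  # bits set where neighbors differ
        for k in range(REPLICAS):
            counts[k] += (diff >> k) & 1
    return counts

# A random bit-packed configuration; flipping site 0 in *all* replicas at once
# is a single XOR with an all-ones mask.
spins = [random.getrandbits(REPLICAS) for _ in range(L)]
spins[0] ^= (1 << REPLICAS) - 1
```

On a GPU the same bit-level layout lets one thread update many replicas per memory access, which is where the reported speed-up comes from.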
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.
2016-03-01
Direct Simulation Monte Carlo (DSMC) is a powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, exploration of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modification of several parameters influences the 3D flow and gas temperature fields, and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and assess the sensitivity of the solutions to certain inputs. It is found that among cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.
Treatment planning for a small animal using Monte Carlo simulation
Chow, James C. L.; Leung, Michael K. K.
2007-12-15
The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Monte Carlo Simulations of Background Spectra in Integral Imager Detectors
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.
1998-01-01
Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.
Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.
Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G
2010-04-16
We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.
Monte Carlo simulation of laser backscatter from sea water
NASA Astrophysics Data System (ADS)
Koerber, B. W.; Phillips, D. M.
1982-01-01
A Monte Carlo simulation study of laser backscatter from sea water has been carried out to provide data required to assess the feasibility of measuring inherent optical propagation properties of sea water from an aircraft. The possibility was examined of deriving such information from the backscatter component of the return signals measured by the WRELADS laser airborne depth sounder system. Computations were made for various water turbidity conditions and for different fields of view of the WRELADS receiver. Using a simple model fitted to the computed backscatter data, it was shown that values of the scattering and absorption coefficients can be derived from the initial amplitude and the decay rate of the backscatter envelope.
Magnetic properties for cobalt nanorings: Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Ye, Qingying; Chen, Shuiyuan; Zhong, Kehua; Huang, Zhigao
2012-02-01
In this paper, two structure models of cobalt nanoring cells (double-nanorings and four-nanorings, named D-rings and F-rings, respectively) have been considered. Based on Monte Carlo simulation, the magnetic properties of the D-rings and F-rings, such as hysteresis loops, spin configurations, and coercivity, have been studied. The simulated results indicate that both D-rings and F-rings with different inner radius (r) and separation of ring centers (d) display interesting magnetization behavior and spin configurations (onion-, vortex-, and crescent-shaped vortex-type states) in the magnetization process. Moreover, it is found that the overlap between nearest-neighboring nanorings can result in the deviation of the vortex-type states in the connected regions. Therefore, the appropriate d should be well considered in the design of nanoring devices. The simulated results can be explained by the competition between exchange energy and dipolar energy in the Co nanoring system. Furthermore, it is found that the simulated temperature dependence of the coercivity for the D-rings with different d can be well described by Hc = H0 exp[-(T/T0)^p].
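The fitted temperature dependence can be evaluated directly; a minimal sketch follows, in which the parameter values are hypothetical placeholders, not values from the paper:

```python
import math

# Hypothetical fit parameters for illustration only (H0 in Oe, T0 in K).
H0, T0, P = 800.0, 250.0, 0.75

def coercivity(T):
    """Fitted form of the simulated coercivity: Hc(T) = H0 * exp(-(T/T0)**p)."""
    return H0 * math.exp(-(T / T0) ** P)
```

The form guarantees Hc(0) = H0 and a monotonic decrease with temperature, with the exponent p controlling how fast the coercivity collapses.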
Learning About Ares I from Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hall, Charlie E.
2008-01-01
This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.
Yeh, Chun-Hung; Schmitt, Benoît; Le Bihan, Denis; Li-Schlittgen, Jing-Rebecca; Lin, Ching-Po; Poupon, Cyril
2013-01-01
This article describes the development and application of an integrated, generalized, and efficient Monte Carlo simulation system for diffusion magnetic resonance imaging (dMRI), named Diffusion Microscopist Simulator (DMS). DMS comprises a random walk Monte Carlo simulator and an MR image synthesizer. The former has the capacity to perform large-scale simulations of Brownian dynamics in the virtual environments of neural tissues at various levels of complexity, and the latter is flexible enough to synthesize dMRI datasets from a variety of simulated MRI pulse sequences. The aims of DMS are to give insights into the link between the fundamental diffusion process in biological tissues and the features observed in dMRI, as well as to provide appropriate ground-truth information for the development, optimization, and validation of dMRI acquisition schemes for different applications. The validity, efficiency, and potential applications of DMS are evaluated through four benchmark experiments, including the simulated dMRI of white matter fibers, the multiple scattering diffusion imaging, the biophysical modeling of polar cell membranes, and the high angular resolution diffusion imaging and fiber tractography of complex fiber configurations. We expect that this novel software tool will be substantially advantageous in clarifying the interrelationship between dMRI and the microscopic characteristics of brain tissues, and in advancing biophysical modeling and dMRI methodologies.
Progress report for the Monte-Carlo gamma-ray spectrum simulation program BSIMUL
NASA Technical Reports Server (NTRS)
Haywood, S. E.; Rester, A. C., Jr.
1996-01-01
The progress made during 1995 on the Monte-Carlo gamma-ray spectrum simulation program BSIMUL is discussed. Several features have been added, including the ability to model shields that are tapered cylinders. Several simulations were made on the Near Earth Asteroid Rendezvous detector.
Relation between gamma-ray family and EAS core: Monte-Carlo simulation of EAS core
NASA Technical Reports Server (NTRS)
Yanagita, T.
1985-01-01
Preliminary results of a Monte-Carlo simulation of Extensive Air Shower (EAS) cores (Ne = 100,000) are reported. For the first collision at the top of the atmosphere, a model with high multiplicity (high rapidity density) and large Pt (1.5 GeV average) is assumed. Most of the simulated cores show a complicated structure.
Parallel Monte Carlo Simulation for control system design
NASA Technical Reports Server (NTRS)
Schubert, Wolfgang M.
1995-01-01
The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
Liu, Zhirong; Chan, Hue Sun
2008-04-14
We develop two classes of Monte Carlo moves for efficient sampling of wormlike DNA chains that can have significant degrees of supercoiling, a conformational feature that is key to many aspects of biological function including replication, transcription, and recombination. One class of moves entails reversing the coordinates of a segment of the chain along one, two, or three axes of an appropriately chosen local frame of reference. These transformations may be viewed as a generalization, to the continuum, of the Madras-Orlitsky-Shepp algorithm for cubic lattices. Another class of moves, termed T+/-2, allows for interconversions between chains with different lengths by adding or subtracting two beads (monomer units) to or from the chain. Length-changing moves are generally useful for conformational sampling with a given site juxtaposition, as has been shown in previous lattice studies. Here, the continuum T+/-2 moves are designed to enhance their acceptance rate in supercoiled conformations. We apply these moves to a wormlike model in which excluded volume is accounted for by a bond-bond repulsion term. The computed autocorrelation functions for the relaxation of bond length, bond angle, writhe, and branch number indicate that the new moves lead to significantly more efficient sampling than conventional bead displacements and crankshaft rotations. A close correspondence is found in the equilibrium ensemble between the map of writhe computed for pairs of chain segments and the map of site juxtapositions or self-contacts. To evaluate the more coarse-grained freely jointed chain (random-flight) and cubic lattice models that are commonly used in DNA investigations, twisting (torsional) potentials are introduced into these models. Conformational properties for a given superhelical density sigma may then be sampled by computing the writhe and using White's formula to relate the degree of twisting to writhe and sigma. Extensive comparisons of contact patterns and knot
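A heavily simplified, single-axis version of such a coordinate-reversal move can be sketched as follows. This is schematic only: the paper's moves act in a local frame chosen for the supercoiled chain, whereas here one coordinate is simply mirrored about the segment's first bead.

```python
def reflect_segment(coords, i, j, axis):
    """Schematic single-axis reversal move: mirror the chosen coordinate of
    beads i..j-1 about bead i's value. Applying the move twice restores
    the original chain, so the move is its own inverse."""
    x0 = coords[i][axis]
    out = [list(p) for p in coords]
    for k in range(i, j):
        out[k][axis] = 2 * x0 - coords[k][axis]
    return out
```

Self-inverse moves of this kind satisfy detailed balance trivially, since the forward and reverse proposal probabilities are identical.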
Catfish: A Monte Carlo simulator for black holes at the LHC
NASA Astrophysics Data System (ADS)
Cavaglià, M.; Godang, R.; Cremaldi, L.; Summers, D.
2007-09-01
We present a new Fortran Monte Carlo generator to simulate black hole events at CERN's Large Hadron Collider. The generator interfaces to the PYTHIA Monte Carlo fragmentation code. The physics of the BH generator includes, but is not limited to, inelasticity effects, exact field emissivities, corrections to semiclassical black hole evaporation, and gravitational energy loss at formation. These features are essential to realistically reconstruct the detector response and test different models of black hole formation and decay at the LHC.
Monte Carlo Simulation of Callisto's Exosphere
NASA Astrophysics Data System (ADS)
Vorburger, Audrey; Wurz, Peter; Galli, André; Mousis, Olivier; Barabash, Stas; Lammer, Helmut
2014-05-01
to the surface the sublimated particles dominate the day-side exosphere, however, their density profiles (with the exception of H and H2) decrease much more rapidly with altitude than those of the sputtered particles, thus, the latter particles start to dominate at altitudes above ~1000 km. Since the JUICE flybys are as low as 200 km above Callisto's surface, NIM is expected to register both the sublimated as well as sputtered particle populations. Our simulations show that NIM's sensitivity is high enough to allow the detection of particles sputtered from the icy as well as the mineral surfaces, and to distinguish between the different composition models.
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke; Pignol, Jean-Philippe; Lechtman, Eli; Reilly, Raymond M.
2013-11-15
Purpose: The authors’ aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF).Methods: The authors used Monte Carlo N-particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10–100 keV), an x-ray beam (100 kVp), and {sup 125}I and {sup 103}Pd brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays were compared.Results: NDEFs show a strong dependence on photon energies with peaks at 15, 30/40, and 90 keV. Cell model and subcellular location of AuNPs influence the peak position and value of NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either single cell or cluster cell models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus and the cytoplasm. For 40 keV x-rays and a cluster of cells, to double the prescribed x-ray dose (NDEF = 2
An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media
NASA Astrophysics Data System (ADS)
Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu
2016-03-01
Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and the characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
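One common way such a precomputed lookup table is consumed is bilinear interpolation over the optical-property grid. The sketch below uses a toy (mu_a, mu_s) grid and made-up reflectance values purely for illustration; the paper instead fits an empirical surface to the table.

```python
import bisect

def lookup_reflectance(table, mua_grid, mus_grid, mua, mus):
    """Bilinear interpolation in a (mu_a, mu_s) -> diffuse reflectance table.
    table[i][j] is the Monte Carlo result at (mua_grid[i], mus_grid[j])."""
    i = min(max(bisect.bisect_right(mua_grid, mua) - 1, 0), len(mua_grid) - 2)
    j = min(max(bisect.bisect_right(mus_grid, mus) - 1, 0), len(mus_grid) - 2)
    tx = (mua - mua_grid[i]) / (mua_grid[i + 1] - mua_grid[i])
    ty = (mus - mus_grid[j]) / (mus_grid[j + 1] - mus_grid[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])
```

A fitted closed-form surface, as proposed in the paper, removes even this interpolation step and the need to ship the table itself.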
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Parallel canonical Monte Carlo simulations through sequential updating of particles
NASA Astrophysics Data System (ADS)
O'Keeffe, C. J.; Orkoulas, G.
2009-04-01
In canonical Monte Carlo simulations, sequential updating of particles is equivalent to random updating due to particle indistinguishability. In contrast, in grand canonical Monte Carlo simulations, sequential implementation of the particle transfer steps in a dense grid of distinct points in space improves both the serial and the parallel efficiency of the simulation. The main advantage of sequential updating in parallel canonical Monte Carlo simulations is the reduction in interprocessor communication, which is usually a slow process. In this work, we propose a parallelization method for canonical Monte Carlo simulations via domain decomposition techniques and sequential updating of particles. Each domain is further divided into a middle and two outer sections. Information exchange is required after the completion of the updating of the outer regions. During the updating of the middle section, communication does not occur unless a particle moves out of this section. Results on two- and three-dimensional Lennard-Jones fluids indicate a nearly perfect improvement in parallel efficiency for large systems.
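The middle/outer decomposition logic can be sketched as follows. This is an illustrative reading of the scheme for a 1-D domain with a hypothetical outer-strip width; the actual method applies it per spatial domain in 2-D/3-D.

```python
def section(x, lo, hi, w):
    """Classify position x in domain [lo, hi): two outer strips of width w
    flank the middle section."""
    if x < lo + w:
        return "outer-left"
    if x >= hi - w:
        return "outer-right"
    return "middle"

def update_needs_sync(x_new, lo, hi, w):
    """While the middle section is being updated, interprocessor exchange is
    needed only if a particle ends up outside the middle section."""
    return section(x_new, lo, hi, w) != "middle"
```

Because most displacements keep a particle inside its middle section, communication happens only after the outer strips are updated, which is the source of the reported parallel efficiency.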
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. As a proof of principle, the technique demonstrates the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
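The reported scaling can be reproduced with a toy cost model. The sketch assumes per-machine-hour billing (which the relative-cost result implies), not any specific cloud provider's pricing.

```python
import math

def completion_time(total_hours, n):
    """Wall-clock completion time scales as 1/n for n parallel machines."""
    return total_hours / n

def billed_hours(total_hours, n):
    """With per-hour billing, each of the n machines is charged a whole
    number of hours, so total cost is lowest when n divides the runtime."""
    return n * math.ceil(total_hours / n)
```

For a 12 h workload, n = 4 machines finish in 3 h for the same 12 billed machine-hours, whereas n = 5 machines are billed 15 machine-hours: cost is optimal when n is a factor of the total simulation time in hours.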
Fast Off-Lattice Monte Carlo Simulations with Soft Potentials
NASA Astrophysics Data System (ADS)
Zong, Jing; Yang, Delian; Yin, Yuhua; Zhang, Xinghua; Wang, Qiang (David)
2011-03-01
Fast off-lattice Monte Carlo simulations with soft repulsive potentials that allow particle overlapping give orders-of-magnitude faster sampling of the configurational space than conventional molecular simulations with hard-core repulsions (such as the hard-sphere or Lennard-Jones repulsion). Here we present our fast off-lattice Monte Carlo simulations ranging from small-molecule soft spheres and liquid crystals to polymeric systems including homopolymers and rod-coil diblock copolymers. The simulation results are compared with various theories based on the same Hamiltonian as in the simulations (thus without any parameter-fitting) to quantitatively reveal the consequences of approximations in these theories. Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).
Composite system reliability evaluation using sequential Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jonnavithula, Annapoorani
Monte Carlo simulation methods can be effectively used to assess the adequacy of composite power system networks. The sequential simulation approach is the most fundamental technique available and can be used to provide a wide range of indices. It can also be used to provide estimates which can serve as benchmarks against which other approximate techniques can be compared. The focus of this research work is on the reliability evaluation of composite generation and transmission systems with special reference to frequency and duration related indices and estimated power interruption costs at each load bus. One of the main objectives is to use the sequential simulation method to create a comprehensive technique for composite system adequacy evaluation. This thesis recognizes the need for an accurate representation of the load model at the load buses which depends on the mix of customer sectors at each bus. Chronological hourly load curves are developed in this thesis, recognizing the individual load profiles of the customers at each load bus. Reliability worth considerations are playing an ever increasing role in power system planning and operation. Different methods for bus outage cost evaluation are proposed in this thesis. It may not be computationally feasible to use the sequential simulation method with time varying loads at each bus in large electric power system networks. Time varying load data may also not be available at each bus. This research work uses the sequential methodology as a fundamental technique to calibrate other non-sequential methods such as the state sampling and state transition sampling techniques. Variance reduction techniques that improve the efficiency of the sequential simulation procedure are investigated as a part of this research work. Pertinent features that influence reliability worth assessment are also incorporated. All the proposed methods in this thesis are illustrated by application to two reliability test systems. In addition
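A minimal chronological (sequential) simulation of a single repairable component illustrates the fundamental technique: alternate exponentially distributed up and down durations and accumulate the fraction of time spent down. The MTTF/MTTR figures used in the note below are hypothetical; composite-system studies apply the same idea to many components at once.

```python
import random

def unavailability(mttf, mttr, horizon, seed=1):
    """Sequential Monte Carlo estimate of the unavailability of one
    repairable component: exponential up times (mean mttf) alternate with
    exponential repair times (mean mttr) over the simulated horizon."""
    rng = random.Random(seed)
    t = down = 0.0
    up = True
    while t < horizon:
        dur = rng.expovariate(1.0 / (mttf if up else mttr))
        dur = min(dur, horizon - t)   # clip the last state to the horizon
        if not up:
            down += dur
        t += dur
        up = not up
    return down / horizon
```

For mttf = 100 h and mttr = 10 h the estimate converges to the analytic value mttr / (mttf + mttr) ≈ 0.091, and the chronological record also yields the frequency and duration indices that non-chronological (state-sampling) methods cannot provide directly.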
Monte Carlo simulations of ionization potential depression in dense plasmas
Stransky, M.
2016-01-15
A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
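For context, the Debye-Hückel estimate that serves as the low-density reference can be computed directly. This is a textbook formula sketch, not the authors' particle-particle model; the (z + 1) prefactor follows the common Ecker-Kröll-style convention, and the density/temperature values in the test are arbitrary.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
QE = 1.602176634e-19      # elementary charge [C]

def debye_length(n_e, T_eV):
    """Electron Debye length [m]: sqrt(eps0 * kT / (n_e * e^2)),
    with n_e in m^-3 and the temperature given in eV."""
    return math.sqrt(EPS0 * T_eV * QE / (n_e * QE * QE))

def ipd_debye_hueckel(z, n_e, T_eV):
    """Debye-Hückel ionization-potential depression [eV] for charge state z:
    delta_I = (z + 1) * e^2 / (4 * pi * eps0 * lambda_D), expressed in eV."""
    return (z + 1) * QE / (4 * math.pi * EPS0 * debye_length(n_e, T_eV))
```

Depressions from microfield simulations can then be compared against this screening-length estimate in the dilute regime.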
NASA Astrophysics Data System (ADS)
Nagata, Minori; Nagata, Hiroyasu
This article continues the work of [9], which focused on n-species food-chain systems. Computer simulation is important today, and its results may inform policy. Here, we vary the mortality rate of the bottom prey in order to map the survival region of the top predator, which is crucial for the conservation of the ecosystem. We carry out Monte Carlo simulations on finite-size lattices composed of the interacting species. The bottom-prey mortality rate is varied from 0 up to the value at which extinction occurs. We then find the steady-state densities as a function of the number of species n and plot the predator survival region. Conserving the top-predator population is expected to be substantially difficult, because the bottom-prey density gradually diminishes.
Yamada, Masako; Butts, Matthew D; Kalla, Karen K
2005-01-01
We show the results of Mie-scattering Monte Carlo models developed to simulate the optical properties of light incident on particle-containing coatings. The model accommodates mixtures of particles with different sizes and complex refractive indices, enabling the simulation of formulations, including pigments. The simulation tracks trajectories of photons as they propagate through the turbid medium, calculating both angular and spatial light intensity distributions. Scalar quantities such as total transmission and reflection, and haze and diffuse reflectance, are also calculated.
Monte Carlo code for high spatial resolution ocean color simulations.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito; Cunha, José C
2010-09-10
A Monte Carlo code for ocean color simulations has been developed to model in-water radiometric fields of downward and upward irradiance (E_d and E_u) and upwelling radiance (L_u) in a two-dimensional domain with high spatial resolution. The efficiency of the code has been optimized by applying state-of-the-art computing solutions, while the accuracy of simulation results has been quantified through benchmarking against the widely used Hydrolight code for various values of seawater inherent optical properties and different illumination conditions. Considering a seawater single-scattering albedo of 0.9, as well as surface waves of 5 m width and 0.5 m height, the study has shown that the number of photons required to quantify uncertainties induced by wave-focusing effects on E_d, E_u, and L_u data products is of the order of 10^6, 10^9, and 10^10, respectively. On this basis, the effects of sea-surface geometries on radiometric quantities have been investigated for different surface gravity waves. Data products from simulated radiometric profiles have finally been analyzed as a function of the deployment speed and sampling frequency of current free-fall systems, in view of providing recommendations to improve measurement protocols.
Monte Carlo Simulation of rainfall hyetographs for analysis and design
NASA Astrophysics Data System (ADS)
Kottegoda, N. T.; Natale, L.; Raiteri, E.
2014-11-01
Observations of high-intensity rainfall have been recorded at gauging stations in many parts of the world. In some instances the resulting data sets may not be sufficient in their scope and variability for purposes of analysis or design. By directly incorporating statistical properties of hyetographs with respect to the number of events per year, storm duration, peak intensity, cumulative rainfall, and rising and falling limbs, we develop a basic procedure for Monte Carlo simulation. Rainfall data from Pavia and Milano in the Lombardia region and from five gauging stations in the Piemonte region of northern Italy are used in this study. Firstly, we compare the hydrologic output from our model with that from other design storm methods for validation. Secondly, depth-duration-frequency curves are obtained from historical data, and corresponding functions from simulated data are compared for further validation of the procedure. By adopting this original procedure one can simulate an unlimited range of realistic hydrographs that can be used in risk assessment. The potential for extension to ungauged catchments is shown.
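The storm-level statistics listed above (events per year, duration, peak intensity, rising and falling limbs) can be composed into a simple generator. The sketch below uses assumed lognormal/beta/Poisson distributions and triangular limbs purely for illustration; the paper's distributions fitted to the Italian gauges are not reproduced here.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for a Poisson draw (number of storm events per year)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def synthetic_hyetograph(rng, dt=0.25):
    """One storm as a triangular hyetograph with linear rising/falling limbs.
    All distribution parameters below are illustrative assumptions."""
    duration = rng.lognormvariate(1.0, 0.5)      # storm duration, hours
    peak = rng.lognormvariate(2.5, 0.6)          # peak intensity, mm/h
    t_peak = duration * rng.betavariate(2, 3)    # peak position within the storm
    series, t = [], 0.0
    while t < duration:
        if t <= t_peak:
            i = peak * t / t_peak if t_peak > 0 else peak
        else:
            i = peak * (duration - t) / (duration - t_peak)
        series.append(max(i, 0.0))
        t += dt
    return duration, peak, series

def annual_depth(rng, mean_events=20, dt=0.25):
    """Total simulated rainfall depth (mm) for one synthetic year."""
    depth = 0.0
    for _ in range(poisson(rng, mean_events)):
        _, _, series = synthetic_hyetograph(rng, dt)
        depth += dt * sum(series)
    return depth
```

Repeating `annual_depth` over many synthetic years yields the unlimited ensemble of realistic events from which depth-duration-frequency statistics can be extracted.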
Monte Carlo Simulation of Solar Reflectances for Cloudy Atmospheres.
NASA Astrophysics Data System (ADS)
Barker, H. W.; Goldstein, R. K.; Stevens, D. E.
2003-08-01
Monte Carlo simulations of solar radiative transfer were performed for a well-resolved, large, three-dimensional (3D) domain of boundary layer cloud simulated by a cloud-resolving model. In order to represent 3D distributions of optical properties for 2 × 10^6 cloudy cells, attenuation by droplets was handled by assigning each cell a cumulative distribution of extinction derived from either a model or an assumed discrete droplet size spectrum. This minimizes the required number of detailed phase functions. Likewise, to simulate statistically significant, high-resolution imagery, it was necessary to apply variance reduction techniques. Three techniques were developed for use with the local estimation method of computing reflectance. First, small fractions of the reflectance come from numerous small contributions computed at each scattering event. Terminating the calculation of a photon's contribution when it falls below a minimum cutoff was found to impact reflectance estimates minimally but reduced computation time by 10%. Second, large fractions of the reflectance come from infrequent realizations of large contributions. When sampled poorly, they boost Monte Carlo noise significantly. Removing contributions above a maximum cutoff, storing them in a domain-wide reservoir, adding the cutoff value to local estimates, and, at the simulation's end, distributing the reservoir across the domain in proportion to local reflectance tends to reduce variance considerably. This regionalization technique works well when the number of photons per unit area is small (nominally 50 000). A suitable maximum cutoff reduces the variance of reflectance greatly with little impact on its estimates. Third, …
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed to ensure detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
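The detailed-balance bookkeeping mentioned above, where a non-symmetric concerted move must carry a proposal-weight (Jacobian) ratio, reduces to a generalized Metropolis criterion. A hedged sketch follows; the actual ParRot weights are not derived here, and `w_ratio = 1` recovers plain Metropolis:

```python
import math
import random

def metropolis_accept(dE, beta, rng, w_ratio=1.0):
    """Generalized Metropolis test: accept with probability
    min(1, w_ratio * exp(-beta * dE)), where w_ratio is the ratio of
    proposal weights (Jacobians) for a non-symmetric move such as a
    concerted four-bond rotation. w_ratio = 1 is the symmetric case."""
    return rng.random() < min(1.0, w_ratio * math.exp(-beta * dE))

def sample_two_state(beta, steps, rng):
    """Sanity check on a two-level system (energies 0 and 1) with symmetric
    proposals; the stationary occupation ratio should approach exp(-beta)."""
    energy = (0.0, 1.0)
    state, counts = 0, [0, 0]
    for _ in range(steps):
        trial = 1 - state
        if metropolis_accept(energy[trial] - energy[state], beta, rng):
            state = trial
        counts[state] += 1
    return counts
```

The two-state chain is only a correctness check; in a polymer simulation `dE` would come from the force field and `w_ratio` from the geometry of the concerted move.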
Monte Carlo simulation by computer for life-cycle costing
NASA Technical Reports Server (NTRS)
Gralow, F. H.; Larson, W. J.
1969-01-01
Prediction of behavior and support requirements during the entire life cycle of a system enables accurate cost estimates by using the Monte Carlo simulation by computer. The system reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
Quantum Monte Carlo simulation of topological phase transitions
NASA Astrophysics Data System (ADS)
Yamamoto, Arata; Kimura, Taro
2016-12-01
We study electron-electron interaction effects on topological phase transitions by ab initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows that the electron-electron interaction modifies or extinguishes topological phase transitions.
A Monte Carlo simulation of a supersaturated sodium chloride solution
NASA Astrophysics Data System (ADS)
Schwendinger, Michael G.; Rode, Bernd M.
1989-03-01
A simulation of a supersaturated sodium chloride solution with the Monte Carlo statistical thermodynamic method is reported. The water-water interactions are described by the Matsuoka-Clementi-Yoshimine (MCY) potential, while the ion-water potentials have been derived from ab initio calculations. Structural features of the solution have been evaluated, special interest being focused on possible precursors of nucleation.
Play It Again: Teaching Statistics with Monte Carlo Simulation
ERIC Educational Resources Information Center
Sigal, Matthew J.; Chalmers, R. Philip
2016-01-01
Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…
Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation
ERIC Educational Resources Information Center
Silver, N. Clayton; Hittner, James B.; May, Kim
2004-01-01
The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…
Monte Carlo ICRH simulations in fully shaped anisotropic plasmas
Jucker, M.; Graves, J. P.; Cooper, W. A.; Mellet, N.; Brunner, S.
2008-11-01
In order to numerically study the effects of Ion Cyclotron Resonant Heating (ICRH) on the fast particle distribution function in general plasma geometries, three codes have been coupled: VMEC generates a general (2D or 3D) MHD equilibrium including full shaping and pressure anisotropy. This equilibrium is then mapped into Boozer coordinates. The full-wave code LEMan then calculates the power deposition and electromagnetic field strength of a wave field generated by a chosen antenna using a warm model. Finally, the single particle Hamiltonian code VENUS combines the outputs of the two previous codes in order to calculate the evolution of the distribution function. Within VENUS, Monte Carlo operators for Coulomb collisions of the fast particles with the background plasma have been implemented, accounting for pitch angle and energy scattering. Also, ICRH is simulated using Monte Carlo operators on the Doppler shifted resonant layer. The latter operators act in velocity space and induce a change of perpendicular and parallel velocity depending on the electric field strength and the corresponding wave vector. Eventually, the change in the distribution function will then be fed into VMEC for generating a new equilibrium and thus a self-consistent solution can be found. This model is an enhancement of previous studies in that it is able to include full 3D effects such as magnetic ripple, treat the effects of non-zero orbit width consistently and include the generation and effects of pressure anisotropy. Here, first results of coupling the three codes will be shown in 2D tokamak geometries.
Monte Carlo modeling of coherent scattering: Influence of interference
Leliveld, C.J.; Maas, J.G.; Bom, V.R.; Eijk, C.W.E. van
1996-12-01
In this study, the authors present Monte Carlo (MC) simulation results for the intensity and angular distribution of scattered radiation from cylindrical absorbers. For coherent scattering the authors have taken into account the effects of interference by using new molecular form factor data for the AAPM plastic materials and water. The form factor data were compiled from X-ray diffraction measurements. The new data have been implemented in the authors' Electron Gamma Shower (EGS4) Monte Carlo system. The hybrid MC simulation results show a significant influence on the intensity and the angular distribution of coherently scattered photons. They conclude that MC calculations are significantly in error when interference effects are ignored in the model for coherent scattering. Especially for simulation studies of scattered radiation in collimated geometries, where small-angle scattering will prevail, the coherent scatter contribution is highly overestimated when conventional form factor data are used.
Implementation of Monte Carlo Simulations for the Gamma Knife System
NASA Astrophysics Data System (ADS)
Xiong, W.; Huang, D.; Lee, L.; Feng, J.; Morris, K.; Calugaru, E.; Burman, C.; Li, J.; Ma, C.-M.
2007-06-01
Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions or the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.90)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
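The tracker/server split described above can be sketched in a few lines: tally indices are partitioned across servers (non-overlapping), and trackers ship each scoring event to the owning server instead of holding the full tally array locally. This toy version uses plain function calls where OpenMC uses asynchronous MPI messages; all names are illustrative.

```python
from collections import defaultdict

class TallyServer:
    """Owns a disjoint slice of the global tally space and accumulates scores."""
    def __init__(self):
        self.tallies = defaultdict(float)

    def receive(self, index, score):
        self.tallies[index] += score

def owning_server(index, n_servers):
    # Non-overlapping decomposition: each tally index maps to exactly one server.
    return index % n_servers

def track_particles(scoring_events, servers):
    """Trackers simulate particle histories and forward each (index, score)
    event to the owning tally server (here a direct call; in MPI, a send)."""
    for index, score in scoring_events:
        servers[owning_server(index, len(servers))].receive(index, score)

def global_tally(servers):
    """Merge server slices back into one dictionary for final reporting."""
    merged = {}
    for s in servers:
        merged.update(s.tallies)   # slices are disjoint, so no collisions
    return merged
```

Because each index lives on exactly one server, no synchronization between servers is needed; the communication cost is the per-event message traffic, which is what the paper's performance model quantifies.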
Monte Carlo simulation of virtual Compton scattering below pion threshold
NASA Astrophysics Data System (ADS)
Janssens, P.; Van Hoorebeke, L.; Fonvieille, H.; D'Hose, N.; Bertin, P. Y.; Bensafa, I.; Degrande, N.; Distler, M.; Di Salvo, R.; Doria, L.; Friedrich, J. M.; Friedrich, J.; Hyde-Wright, Ch.; Jaminion, S.; Kerhoas, S.; Laveissière, G.; Lhuillier, D.; Marchand, D.; Merkel, H.; Roche, J.; Tamas, G.; Vanderhaeghen, M.; Van de Vyver, R.; Van de Wiele, J.; Walcher, Th.
2006-10-01
This paper describes the Monte Carlo simulation developed specifically for the Virtual Compton Scattering (VCS) experiments below pion threshold that have been performed at MAMI and JLab. This simulation generates events according to the (Bethe-Heitler + Born) cross-section behaviour and takes into account all relevant resolution-deteriorating effects. It determines the "effective" solid angle for the various experimental settings which are used for the precise determination of the photon electroproduction absolute cross-section.
Direct Monte Carlo Simulations of Hypersonic Viscous Interactions Including Separation
NASA Technical Reports Server (NTRS)
Moss, James N.; Rault, Didier F. G.; Price, Joseph M.
1993-01-01
Results of calculations obtained using the direct simulation Monte Carlo method for Mach 25 flow over a control surface are presented. The numerical simulations are for a 35-deg compression ramp at a low-density wind-tunnel test condition. Calculations obtained using both two- and three-dimensional solutions are reviewed, and a qualitative comparison is made with oil-flow pictures that highlight separation and three-dimensional flow structure.
Thomas, R S; Yang, R S; Morgan, D G; Moorman, M P; Kermani, H R; Sloane, R A; O'Connor, R W; Adkins, B; Gargas, M L; Andersen, M E
1996-01-01
During a 2-year chronic inhalation study of methylene chloride (2000 or 0 ppm; 6 hr/day, 5 days/week), gas-uptake pharmacokinetic studies and tissue partition coefficient determinations were conducted on female B6C3F1 mice after 1 day, 1 month, 1 year, and 2 years of exposure. Using physiologically based pharmacokinetic (PBPK) modeling coupled with Monte Carlo simulation and bootstrap resampling for data analyses, a significant induction in the mixed-function oxidase (MFO) rate constant (Vmaxc) was observed at the 1-day and 1-month exposure points when compared to concurrent control mice, while decreases in the glutathione S-transferase (GST) rate constant (Kfc) were observed in the 1-day and 1-month exposed mice. Within exposure groups, the apparent Vmaxc maintained significant increases in the 1-month and 2-year control groups. Although the same initial increase exists in the exposed group, the 2-year Vmaxc is significantly smaller than in the 1-month group (p < 0.001). Within-group differences in median Kfc values show a significant decrease in both 1-month and 2-year groups among control and exposed mice (p < 0.001). Although no changes in methylene chloride solubility as a result of prior exposure were observed in blood, muscle, liver, or lung, a marginal decrease in the fat:air partition coefficient was found in the exposed mice at p = 0.053. Age-related solubility differences were found in muscle:air, liver:air, lung:air, and fat:air partition coefficients at p < 0.001, while the solubility of methylene chloride in blood was not affected by age (p = 0.461). As a result of this study, we conclude that age and prior exposure to methylene chloride can produce notable changes in disposition and metabolism and may represent important factors in the interpretation of toxicologic data and its application to risk assessment. PMID:8875160
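The bootstrap resampling used above to put uncertainty bounds on rate-constant estimates can be sketched generically: resample the data with replacement many times, recompute the statistic, and read confidence limits off the empirical quantiles. A minimal percentile-bootstrap sketch, not the authors' PBPK pipeline; the function names and the 2000-replicate default are assumptions:

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for stat(data):
    resample with replacement, recompute, take empirical quantiles."""
    rng = rng or random.Random()
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Two exposure groups can then be compared by bootstrapping the difference in their statistic (e.g. median Kfc) and checking whether the interval excludes zero.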
Schabel, Matthias C; Fluckiger, Jacob U; DiBella, Edward V R
2010-08-21
Widespread adoption of quantitative pharmacokinetic modeling methods in conjunction with dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has led to increased recognition of the importance of obtaining accurate patient-specific arterial input function (AIF) measurements. Ideally, DCE-MRI studies use an AIF directly measured in an artery local to the tissue of interest, along with measured tissue concentration curves, to quantitatively determine pharmacokinetic parameters. However, the numerous technical and practical difficulties associated with AIF measurement have made the use of population-averaged AIF data a popular, if sub-optimal, alternative to AIF measurement. In this work, we present and characterize a new algorithm for determining the AIF solely from the measured tissue concentration curves. This Monte Carlo blind estimation (MCBE) algorithm estimates the AIF from the subsets of D concentration-time curves drawn from a larger pool of M candidate curves via nonlinear optimization, doing so for multiple (Q) subsets and statistically averaging these repeated estimates. The MCBE algorithm can be viewed as a generalization of previously published methods that employ clustering of concentration-time curves and only estimate the AIF once. Extensive computer simulations were performed over physiologically and experimentally realistic ranges of imaging and tissue parameters, and the impact of choosing different values of D and Q was investigated. We found the algorithm to be robust, computationally efficient and capable of accurately estimating the AIF even for relatively high noise levels, long sampling intervals and low diversity of tissue curves. With the incorporation of bootstrapping initialization, we further demonstrated the ability to blindly estimate AIFs that deviate substantially in shape from the population-averaged initial guess. Pharmacokinetic parameter estimates for K(trans), k(ep), v(p) and v(e) all showed relative biases and
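The core Monte Carlo idea in MCBE, estimating from Q random subsets of D curves and statistically averaging the repeats, can be sketched independently of the pharmacokinetic model. Here `estimate_fn` is a stand-in for the nonlinear AIF optimization (hypothetical); only the subset-and-average wrapper is shown.

```python
import random
import statistics

def mcbe_average(curves, estimate_fn, D=3, Q=50, rng=None):
    """Draw Q random subsets of D curves, run the estimator on each subset,
    and average the repeated estimates (the 'statistical averaging' step).
    Returns the averaged estimate and the spread across repeats."""
    rng = rng or random.Random()
    reps = [estimate_fn(rng.sample(curves, D)) for _ in range(Q)]
    return statistics.mean(reps), statistics.stdev(reps)
```

In the real algorithm `estimate_fn` would return an AIF curve (a vector) rather than a scalar, and the averaging would be done pointwise; the spread across repeats gives a built-in stability diagnostic.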
Radiation response of inorganic scintillators: Insights from Monte Carlo simulations
Prange, Micah P.; Wu, Dangxin; Xie, YuLong; Campbell, Luke W.; Gao, Fei; Kerisit, Sebastien N.
2014-07-24
The spatial and temporal scales of hot particle thermalization in inorganic scintillators are critical factors determining the extent of second- and third-order nonlinear quenching in regions with high densities of electron-hole pairs, which, in turn, leads to the light yield nonproportionality observed, to some degree, for all inorganic scintillators. Therefore, kinetic Monte Carlo simulations were performed to calculate the distances traveled by hot electrons and holes as well as the time required for the particles to reach thermal energy following γ-ray irradiation. CsI, a common scintillator from the alkali halide class of materials, was used as a model system. Two models of quasi-particle dispersion were evaluated, namely, the effective mass approximation model and a model that relied on the group velocities of electrons and holes determined from band structure calculations. Both models predicted rapid electron-hole pair recombination over short distances (a few nanometers) as well as a significant extent of charge separation between electrons and holes that did not recombine and reached thermal energy. However, the effective mass approximation model predicted much longer electron thermalization distances and times than the group velocity model. Comparison with limited experimental data suggested that the group velocity model provided more accurate predictions. Nonetheless, both models indicated that hole thermalization is faster than electron thermalization and thus is likely to be an important factor determining the extent of third-order nonlinear quenching in high-density regions. The merits of different models of quasi-particle dispersion are also discussed.
Diffuse photon density wave measurements and Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kuzmin, Vladimir L.; Neidrauer, Michael T.; Diaz, David; Zubkov, Leonid A.
2015-10-01
Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe-Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source-detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and the diffusion approximation were in very good agreement with the experimental data for a wide range of source-detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of the scatterers.
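A minimal photon-migration sketch conveys the kind of Monte Carlo transport involved. This is a generic slab model with isotropic scattering and implicit-capture weighting, not the Bethe-Salpeter-based procedure of the paper; the coefficient values used below are arbitrary.

```python
import math
import random

def photon_transmittance(mu_s, mu_a, thickness, n_photons, seed=0):
    """Weight fraction transmitted through a slab: exponential free paths,
    isotropic rescattering of the depth direction cosine, and implicit
    absorption (weight *= mu_s / mu_t at each collision). A hard weight
    cutoff stands in for Russian roulette, at the cost of a small bias."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    transmitted = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0          # depth, direction cosine, weight
        while w > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z >= thickness:            # photon exits the far face
                transmitted += w
                break
            if z < 0.0:                   # photon exits back out the entry face
                break
            w *= mu_s / mu_t              # survive the collision, lose weight
            uz = 2.0 * rng.random() - 1.0 # isotropic choice of new depth cosine
    return transmitted / n_photons
```

Recording arrival times and detector positions instead of just the transmitted weight turns the same walk into a DPDW-style estimator.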
Monte Carlo simulations of single and coupled synthetic molecular motors.
Chen, C-M; Zuckermann, M
2012-11-01
We use a minimal model to study the processive motion of coupled synthetic molecular motors along a DNA track, and we present data from Monte Carlo (MC) computer simulations based on this model. The model was originally proposed by Bromley et al. [HFSP J. 3, 204 (2009)] for studying the properties of a synthetic protein motor, the "Tumbleweed" (TW), and involves rigid Y-shaped motors diffusively rotating along the track while controlled by a series of periodically injected ligand pulses into the solution. The advantage of the model is that it mimics the mechanical properties of the TW motor in detail. Both the average first passage time, which measures the diffusive motion of the motors, and the average dwell time on the track, which measures their processivity, are investigated by varying the parameters of the model. The latter include ligand concentration and the range and strength of the binding interaction between motors and the track. In particular, it is of experimental interest to study the dependence of these dynamic time scales of the motors on the ligand concentration. Single rigid TW motors were studied first, since no previous MC simulations of these motors had been performed; for single motors we found a logarithmic decrease of the average first passage time and a logarithmic increase of the average dwell time with increasing ligand concentration. For two coupled motors, the dependence on ligand concentration is still logarithmic for the average first passage time but becomes linear for the average dwell time. This suggests a much greater stability in the processive motion of coupled motors as compared to single motors in the limit of large ligand concentration. By increasing the number of coupled motors, m, it was found that the average first passage time of the coupled motors increases only slowly with m, while the average dwell time increases exponentially with m. Thus the stability of coupled motors on the track can be
Simulations of the Domain State Model
2003-01-01
bulk of the antiferromagnet, the latter is diluted throughout its volume. Extensive Monte Carlo simulations of the model were performed in the past...that a corresponding theoretical model, the domain state model, investigated by Monte Carlo simulations shows a behavior very similar to the...discuss this in detail in the following. ...Monte Carlo methods are used with a heat-bath algorithm and single-spin-flip dynamics [26] for the
Direct Simulation Monte Carlo Simulations of Low Pressure Semiconductor Plasma Processing
Gochberg, L. A.; Ozawa, T.; Deng, H.; Levin, D. A.
2008-12-31
The two widely used plasma deposition tools for semiconductor processing are Ionized Metal Physical Vapor Deposition (IMPVD) of metals using either planar or hollow cathode magnetrons (HCM), and inductively-coupled plasma (ICP) deposition of dielectrics in High Density Plasma Chemical Vapor Deposition (HDP-CVD) reactors. In these systems, the injected neutral gas flows are generally in the transonic to supersonic flow regime. The Hybrid Plasma Equipment Model (HPEM) has been developed and is strategically and beneficially applied to the design of these tools and their processes. For the most part, the model uses continuum-based techniques, and thus, as pressures decrease below 10 mTorr, the continuum approaches in the model become questionable. Modifications have been previously made to the HPEM to significantly improve its accuracy in this pressure regime. In particular, the Ion Monte Carlo Simulation (IMCS) was added, wherein a Monte Carlo simulation is used to obtain ion and neutral velocity distributions in much the same way as in direct simulation Monte Carlo (DSMC). As a further refinement, this work presents the first steps towards the adaptation of full DSMC calculations to replace part of the flow module within the HPEM. Six species (Ar, Cu, Ar*, Cu*, Ar+, and Cu+) are modeled in DSMC. To couple SMILE as a module to the HPEM, source functions for species, momentum and energy from plasma sources will be provided by the HPEM. The DSMC module will then compute a quasi-converged flow field that will provide neutral and ion species densities, momenta and temperatures. In this work, the HPEM results for a hollow cathode magnetron (HCM) IMPVD process using the Boltzmann distribution are compared with DSMC results using portions of those HPEM computations as an initial condition.
Edison, John R; Monson, Peter A
2013-06-21
This article addresses the accuracy of a dynamic mean field theory (DMFT) for fluids in porous materials [P. A. Monson, J. Chem. Phys. 128, 084701 (2008)]. The theory is used to study the relaxation processes of fluids in pores driven by step changes made to a bulk reservoir in contact with the pore. We compare the results of the DMFT to those obtained by averaging over large numbers of dynamic Monte Carlo (DMC) simulation trajectories. The problem chosen for comparison is capillary condensation in slit pores, driven by step changes in the chemical potential in the bulk reservoir and involving a nucleation process via the formation of a liquid bridge. The principal difference between the DMFT results and DMC is the replacement of a distribution of nucleation times and location along the pore for the formation of liquid bridges by a single time and location. DMFT is seen to yield an otherwise qualitatively accurate description of the dynamic behavior.
Monte Carlo simulation studies of diffusion in crowded environments
NASA Astrophysics Data System (ADS)
Nandigrami, Prithviraj; Grove, Brandy; Konya, Andrew; Selinger, Robin
Anomalous diffusion has been observed in protein solutions and other multi-component systems due to macromolecular crowding. Using Monte Carlo simulations, we investigate mechanisms that govern anomalous diffusive transport and pattern formation in a crowded mixture. We consider a multi-component lattice gas model with "tracer" molecules diffusing across a density gradient in a solution containing sticky "crowder" molecules that cluster to form dynamically evolving obstacles. The dependence of tracer flux on crowder density shows an intriguing re-entrant behavior as a function of temperature with three distinct temperature regimes. At high temperature, crowders segregate near the tracer sink but, for low enough overall crowder density, remain sufficiently disordered to allow continuous tracer flux. At intermediate temperature, crowders segregate and block tracer flux entirely, giving rise to complex pattern formation. At low temperature, crowders aggregate to form small, slowly diffusing obstacles. The resulting tracer flux shows scaling behavior near the percolation threshold, analogous to the scenario when the obstacles are fixed and randomly distributed. Our simulations predict distinct quantitative dependence of tracer flux on crowder density in these temperature limits.
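The obstacle effect at the heart of this model can be illustrated with a minimal sketch: a single tracer walking on a lattice with static blocked sites. This strips away the paper's mobile, sticky crowders and density gradient, keeping only hindered diffusion; the lattice size and crowder fraction are illustrative.

```python
import numpy as np

def crowded_walk(L=64, crowder_frac=0.2, n_steps=1000, seed=2):
    # Single tracer on an L x L periodic lattice; moves onto randomly
    # placed static crowder sites are rejected. Returns the squared
    # displacement of the unwrapped tracer position.
    rng = np.random.default_rng(seed)
    blocked = rng.random((L, L)) < crowder_frac
    x = y = 0                         # unwrapped tracer position
    blocked[0, 0] = False             # tracer's start site must be free
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_steps):
        dx, dy = moves[rng.integers(4)]
        nx, ny = x + dx, y + dy
        if not blocked[nx % L, ny % L]:   # crowders block the move
            x, y = nx, ny
    return x * x + y * y
```

Averaging the squared displacement over many walkers and step counts would expose the sub-diffusive scaling near the obstacle percolation threshold that the abstract refers to.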
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, since more particle tracks can be simulated in less time.
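The key requirement for the parallel pseudo-random number generator libraries mentioned above (SPRNG, DCMT) is that each worker receives a statistically independent stream. A minimal sketch of the same idea using NumPy's SeedSequence spawning, with a π estimate standing in for the transport calculation (this is not the MC4 code):

```python
import numpy as np

def mc_pi_chunk(seed_seq, n):
    # One worker's share of the Monte Carlo samples, driven by its own
    # independent child stream.
    rng = np.random.default_rng(seed_seq)
    xy = rng.random((n, 2))
    return np.count_nonzero((xy ** 2).sum(axis=1) <= 1.0)

def parallel_pi(n_workers=4, n_per_worker=100_000, seed=42):
    # Spawn statistically independent child streams, playing the role that
    # SPRNG/DCMT play for MPI ranks; here the chunks run sequentially for
    # simplicity, but each could go to a separate process.
    children = np.random.SeedSequence(seed).spawn(n_workers)
    hits = sum(mc_pi_chunk(ss, n_per_worker) for ss in children)
    return 4.0 * hits / (n_workers * n_per_worker)
```

Because the child streams are independent by construction, the combined estimate is statistically equivalent to a single long serial run, which is exactly the validation property the authors check between serial and parallel MC4.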
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
NASA Astrophysics Data System (ADS)
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-01
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time in terms of the numerical parameters is derived on the basis of the simulation results obtained in this study.
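A pitch-angle scattering step of the kind described, in which an electron scatters elastically off the stationary ion background, can be sketched as an isotropic redirection of the velocity at fixed speed (a generic illustration, not the authors' PIC-MCC implementation):

```python
import numpy as np

def pitch_angle_scatter(v, rng):
    # Elastic scattering off a (much heavier) stationary ion: the electron's
    # speed is preserved while its direction is redrawn isotropically.
    speed = np.linalg.norm(v)
    mu = 2.0 * rng.random() - 1.0          # cos(theta) uniform on [-1, 1]
    phi = 2.0 * np.pi * rng.random()
    s = np.sqrt(1.0 - mu * mu)
    return speed * np.array([s * np.cos(phi), s * np.sin(phi), mu])
```

In a full PIC-MCC loop this would be applied per particle with a collision probability per timestep; the speed-preserving property is what distinguishes it from the energy-exchanging neutral-collision model.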
Monte Carlo simulations of single crystals from polymer solutions
NASA Astrophysics Data System (ADS)
Zhang, Jianing; Muthukumar, M.
2007-06-01
A novel "anisotropic aggregation" model is proposed to simulate nucleation and growth of polymer single crystals as functions of temperature and polymer concentration in dilute solutions. Prefolded chains in a dilute solution are assumed to aggregate at a seed nucleus with an anisotropic interaction by a reversible adsorption/desorption mechanism, with temperature, concentration, and seed size being the control variables. The Monte Carlo results of this model resolve the long-standing dilemma regarding kinetic and thermal roughening by producing a rough-flat-rough transition in the crystal morphology with increasing temperature. It is found that the crystal growth rate varies nonlinearly with temperature and concentration without any marked transitions among any regimes of polymer crystallization kinetics. The induction time increases with decreasing seed nucleus size, increasing temperature, or decreasing concentration. The apparent critical nucleus size is found to increase exponentially with increasing temperature or decreasing concentration, leading to a critical nucleus diagram composed in the temperature-concentration plane with three regions of different nucleation barriers: no growth, nucleation and growth, and spontaneous growth. Melting temperatures as functions of the crystal size, heating rate, and concentration are also reported. The present model, falling into the same category as small-molecule crystallization with anisotropic interactions, captures most of the phenomenology of polymer crystallization in dilute solutions.
Monte Carlo modelling of positron transport in real world applications
NASA Astrophysics Data System (ADS)
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
Monte Carlo simulations of parapatric speciation
NASA Astrophysics Data System (ADS)
Schwämmle, V.; Sousa, A. O.; de Oliveira, S. M.
2006-06-01
Parapatric speciation is studied using an individual-based model with sexual reproduction. We combine the theory of mutation accumulation for biological ageing with an environmental selection pressure that varies according to the individuals' geographical positions and phenotypic traits. Fluctuations and genetic diversity of large populations are crucial ingredients to model the features of evolutionary branching and are intrinsic properties of the model. Its implementation on a spatial lattice gives interesting insights into the population dynamics of speciation on a geographical landscape and the disruptive selection that leads to the divergence of phenotypes. Our results suggest that assortative mating is not an obligatory ingredient to obtain speciation in large populations at low gene flow.
Monte Carlo Studies of the Fcc Ising Model.
NASA Astrophysics Data System (ADS)
Polgreen, Thomas Lee
Monte Carlo simulations are performed on the antiferromagnetic fcc Ising model which is relevant to the binary alloy CuAu. The model exhibits a first-order ordering transition as a function of temperature. The lattice free energy of the model is determined for all temperatures. By matching free energies of the ordered and disordered phases, the transition temperature is determined to be T_t = 1.736 J, where J is the coupling constant of the model. The free energy as determined by series expansion and the Kikuchi cluster variation method is compared with the Monte Carlo results. These methods work well for the ordered phase, but not for the disordered phase. A determination of the pair correlation in the disordered phase along the {100} direction indicates a correlation length of ≈ 2.5a at the phase transition. The correlation length exhibits mean-field-like temperature dependence. The Cowley-Warren short range order parameters are determined as a function of temperature for the first twelve nearest-neighbor shells of this model. The Monte Carlo results are used to determine the free parameter in a mean-field-like class of theories described by Clapp and Moss. The ability of these theories to predict ratios between pair potentials is tested with these results. In addition, evidence of a region of heterophase fluctuations is presented in agreement with x-ray diffuse scattering measurements on Cu3Au. The growth of order following a rapid quench from disorder is studied by means of a dynamic Monte Carlo simulation. The results compare favorably with the Landau theory proposed by Chan for temperatures near the first-order phase transition. For lower temperatures, the results are in agreement with the theories of Lifshitz and Allen and Chan. In the intermediate temperature range, our extension of Chan's theory is able to explain our simulation results and recent experimental results.
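The Metropolis sampling underlying such studies can be sketched in a few lines. The sketch below uses a 2D nearest-neighbour ferromagnetic Ising model for brevity rather than the paper's fcc antiferromagnet; lattice size, temperature, and sweep count are illustrative.

```python
import numpy as np

def metropolis_ising(L=16, beta=1.0, sweeps=200, seed=0):
    # Metropolis single-spin-flip dynamics for a 2D nearest-neighbour
    # ferromagnetic Ising model (a simplified stand-in for the fcc
    # antiferromagnet: different lattice, opposite sign of J).
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2.0 * s[i, j] * nb           # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]
    return abs(s.mean())                      # |magnetization| per spin
```

For the antiferromagnetic alloy problem one would instead track the staggered (sublattice) magnetization and, as in the paper, estimate free energies by thermodynamic integration to locate the first-order transition.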
Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media
NASA Astrophysics Data System (ADS)
Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.
2013-04-01
Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle steplengths can generate exact results when compared with integration of the diffusion equation. The same method, however, is completely erroneous when applied to nonhomogeneous diffusion coefficients. A simple alternative, jumping at fixed steplengths with appropriate transition probabilities, produces correct results. Here, a model for diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test to compare Monte Carlo simulation with fixed and Gaussian steplengths.
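The fixed-steplength alternative can be sketched as follows: at each tick the particle attempts a jump of ±dx with probability proportional to the local diffusion coefficient. This is one common discretisation, offered as a hedged sketch only; the correct transition probabilities depend on the stochastic interpretation adopted, and this is not the authors' calcium model.

```python
import numpy as np

def fixed_step_walk(D, x0, n_steps, dx=1.0, seed=0):
    # Fixed-steplength random walk for a spatially varying diffusion
    # coefficient D(x): jump probability per direction is proportional to
    # the local D, normalised by the maximum D over the reachable window.
    rng = np.random.default_rng(seed)
    d_max = max(D(x0 + k * dx) for k in range(-n_steps, n_steps + 1))
    x = x0
    for _ in range(n_steps):
        r = rng.random()
        p = 0.5 * D(x) / d_max      # jump probability per direction (<= 0.5)
        if r < p:
            x += dx
        elif r < 2.0 * p:
            x -= dx
        # otherwise the particle waits this tick
    return x
```

With a constant D this reduces to the simple symmetric random walk, so the ensemble mean squared displacement grows as one step-squared per step, which is a quick consistency check.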
Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.
Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo
2014-10-14
Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain depends essentially on the work performed on the system during the nonequilibrium trajectory, and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles in a well-defined region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.
A Monte Carlo simulation approach for flood risk assessment
NASA Astrophysics Data System (ADS)
Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal
2016-04-01
Floods are the most frequent natural disaster in Canada and the most damaging. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected by this disaster, with the Yamaska River overflowing two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To do this, a preliminary study evaluating the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g., floodplain extent, flood depth) are adopted to study the risk of flooding. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components, namely hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows estimating the annual average cost of flood damage to buildings. The obtained results will be useful for local authorities to support their decisions on risk management and prevention against this disaster.
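The three-component chain described above (flow probability, then submersion height, then damage) can be sketched as a single Monte Carlo loop. All distributions and curve parameters below are illustrative placeholders, not the Brigham study's calibrated functions:

```python
import numpy as np

def annual_average_damage(n_years=50_000, building_value=200_000.0, seed=1):
    # Hedged sketch of the three-component chain: annual peak flow drawn
    # from a Gumbel extreme-value model, a linear flow-to-depth rating
    # curve, and a linear depth-damage function capped at total loss.
    rng = np.random.default_rng(seed)
    flow = rng.gumbel(loc=120.0, scale=40.0, size=n_years)        # m^3/s
    depth = np.clip(0.01 * (flow - 150.0), 0.0, None)             # m above floor
    damage_frac = np.clip(depth / 2.0, 0.0, 1.0)                  # damage curve
    return float(np.mean(damage_frac) * building_value)           # $/year
```

Averaging simulated damage over many synthetic years yields the expected annual damage for one building; a study like this one would sum such estimates over the building stock with per-building damage functions.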
Quantum Monte Carlo simulations for disordered Bose systems
Trivedi, N.
1992-03-01
Interacting bosons in a random potential can be used to model ³He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D, with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or "Bose glass" insulator with gapless excitations, and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.
Burrows, John
2013-04-01
An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
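The procedure described can be sketched as follows: repeatedly generate synthetic calibration data from a known model, refit by least squares, and examine the empirical distribution of the estimated parameters. The design points and noise level are illustrative, not those of Parts II and III:

```python
import numpy as np

def calibration_mc(n_datasets=2000, true_slope=2.0, true_intercept=1.0,
                   noise_sd=0.1, seed=7):
    # Monte Carlo appraisal of ordinary least-squares calibration:
    # sample many synthetic datasets from a known straight-line model
    # and collect the distribution of refitted slopes.
    rng = np.random.default_rng(seed)
    x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])         # calibration standards
    slopes = np.empty(n_datasets)
    for k in range(n_datasets):
        y = true_intercept + true_slope * x + rng.normal(0.0, noise_sd, x.size)
        slopes[k], _ = np.polyfit(x, y, 1)           # slope, intercept
    return slopes.mean(), slopes.std()
```

Comparing the empirical spread of the slopes with the analytical standard error is the kind of statistically valid appraisal the communication argues small real datasets cannot provide on their own.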
Wang, Jianhua; Zhang, Hualin
2008-04-01
A recently developed alternative brachytherapy seed, Cs-1 Rev2 cesium-131, has begun to be used in clinical practice. The dosimetric characteristics of this source in various media, particularly in human tissues, have not been fully evaluated. The aim of this study was to calculate the dosimetric parameters for the Cs-1 Rev2 cesium-131 seed following the recommendations of the AAPM TG-43U1 report [Rivard et al., Med. Phys. 31, 633-674 (2004)] for new sources in brachytherapy applications. Dose rate constants, radial dose functions, and anisotropy functions of the source in water, Virtual Water, and relevant human soft tissues were calculated using MCNP5 Monte Carlo simulations following the TG-43U1 formalism. The results yielded dose rate constants of 1.048, 1.024, 1.041, and 1.044 cGy h⁻¹ U⁻¹ in water, Virtual Water, muscle, and prostate tissue, respectively. The conversion factor for this new source between water and Virtual Water was 1.02, between muscle and water was 1.006, and between prostate and water was 1.004. The authors' calculation of anisotropy functions in a Virtual Water phantom agreed closely with Murphy's measurements [Murphy et al., Med. Phys. 31, 1529-1538 (2004)]. Our calculations of the radial dose function in water and Virtual Water agree well with those in previous experimental and Monte Carlo studies. The TG-43U1 parameters for clinical applications in water, muscle, and prostate tissue are presented in this work.
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17×17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
Multi-pass Monte Carlo simulation method in nuclear transmutations.
Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M
2016-12-01
Monte Carlo methods, in their direct brute simulation incarnation, bring realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components, and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural remedy would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10²⁵ or 10²⁶ members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10²⁵. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10²⁸ steps to have any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that improves the precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the broad subject of Monte Carlo simulations vis-à-vis nuclear reactors.
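The multi-pass idea can be sketched for a single activation channel: probabilities are held fixed within a pass and the composition is updated only between passes. The numbers below are illustrative and far below Avogadro scale:

```python
import numpy as np

def multipass_transmutation(n0=1_000_000, p_act=1e-4, n_passes=50, seed=3):
    # Sketch of the multi-pass approach: event-by-event reweighting would
    # shift probabilities by a vanishing amount per step, so instead the
    # composition (and hence the activation probability's basis) is held
    # fixed within a pass and updated only between passes.
    rng = np.random.default_rng(seed)
    parent, product = n0, 0
    for _ in range(n_passes):
        transmuted = rng.binomial(parent, p_act)   # one pass at fixed odds
        parent -= transmuted
        product += transmuted
    return parent, product
```

Conservation of the total inventory across passes is the basic invariant any such bookkeeping scheme must preserve.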
NASA Astrophysics Data System (ADS)
Robl, Jörg; Hergarten, Stefan
2015-04-01
Debris flows are globally abundant threats to settlements and infrastructure in mountainous regions. Crucial influencing factors for hazard zone planning and mitigation strategies are based on numerical models that describe granular flow on general topography by solving a depth-averaged form of the Navier-Stokes equations in combination with an appropriate flow resistance law. In the case of debris flows, the Voellmy rheology is a widely used constitutive law describing the flow resistance. It combines a velocity-independent Coulomb friction term with a term proportional to the square of the velocity, as is commonly used for turbulent flow. Parameters of the Voellmy fluid are determined by back analysis from observed events so that modelled events mimic their historical counterparts. Determined parameters characterizing individual debris flows show a large variability (related to fluid composition and surface roughness). However, there may be several sets of parameters that lead to a similar depositional pattern but cause large differences in flow velocity and momentum along the flow path. Fluid volumes of hazardous debris flows are estimated by analyzing historic events, precipitation time series, hydrographs or empirical relationships that correlate fluid volumes and drainage areas of torrential catchments. Besides uncertainties in the determination of the fluid volume, the position and geometry of the initial masses of forthcoming debris flows are in general not well constrained but heavily influence the flow dynamics and the depositional pattern even in the run-out zones. In this study we present a new, freely available numerical description of rapid mass movements based on the GERRIS framework and early results of a Monte Carlo simulation exploring effects of the aforementioned parameters on run-out distance, inundated area and momentum. The novel numerical model describes rapid mass movements on complex topography using the shallow water equations in Cartesian
Rapid Monte Carlo simulation of detector DQE(f)
Star-Lack, Josh; Sun, Mingshan; Abel, Eric; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca
2014-03-15
Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10⁷-10⁹ detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation
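The flood-image NPS estimate at the heart of the method can be sketched as an ensemble average of squared 2D Fourier transforms of mean-subtracted realisations (a generic IEC-style estimator, not the authors' full code):

```python
import numpy as np

def noise_power_spectrum(floods, pixel_pitch=1.0):
    # IEC-style NPS estimate: mean-subtract each flood realisation, take
    # its 2D FFT, and ensemble-average |FFT|^2, normalised by the number
    # of realisations and the image area.
    floods = np.asarray(floods, dtype=float)
    n, ny, nx = floods.shape
    nps = np.zeros((ny, nx))
    for img in floods:
        d = img - img.mean()                  # remove the DC (mean) signal
        nps += np.abs(np.fft.fft2(d)) ** 2
    return nps * pixel_pitch ** 2 / (n * ny * nx)
```

For uncorrelated (white) noise of unit variance this normalisation makes the spectrum flat at unity away from the DC bin, a convenient check before feeding measured or simulated floods through it.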
Application of Monte Carlo simulations to improve basketball shooting strategy
NASA Astrophysics Data System (ADS)
Min, Byeong June
2016-10-01
The underlying physics of basketball shooting seems to be a straightforward example of Newtonian mechanics that can easily be traced by using numerical methods. However, a human basketball player does not make use of all the possible basketball trajectories. Instead, a basketball player will build up a database of successful shots and select the trajectory that has the greatest tolerance to the small variations of the real world. We simulate the basketball player's shooting training as a Monte Carlo sequence to build optimal shooting strategies, such as the launch speed and angle of the basketball, and whether to take a direct shot or a bank shot, as a function of the player's court position and height. The phase-space volume Ω that belongs to the successful launch velocities generated by Monte Carlo simulations is then used as the criterion to optimize a shooting strategy that incorporates not only mechanical, but also human, factors.
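The successful-launch phase-space volume Ω can be approximated by random sampling of launch conditions, as sketched below for a simple drag-free trajectory; the court geometry, sampling ranges, and success tolerance are illustrative, not the paper's values.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def success_fraction(d=4.6, h0=2.0, rim_h=3.05, tol=0.12, n=100_000, seed=4):
    # Monte Carlo proxy for the phase-space volume Omega: sample launch
    # speeds and angles uniformly and count drag-free trajectories that
    # arrive within `tol` of rim height at the rim's horizontal distance.
    rng = np.random.default_rng(seed)
    v = rng.uniform(6.0, 10.0, n)                      # launch speed, m/s
    th = np.radians(rng.uniform(30.0, 70.0, n))        # launch angle
    y = h0 + d * np.tan(th) - G * d ** 2 / (2.0 * v ** 2 * np.cos(th) ** 2)
    return np.count_nonzero(np.abs(y - rim_h) < tol) / n
```

Repeating this for each court position (and for bank-shot geometry) and comparing the resulting fractions is the kind of tolerance-based optimisation the abstract describes.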
Computer Monte Carlo simulation in quantitative resource estimation
Root, D.H.; Menzie, W.D.; Scott, W.A.
1992-01-01
The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
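Step (3) can be sketched directly: draw a number of deposits, then draw a grade and tonnage for each, and accumulate contained metal. The Poisson and lognormal parameters below are illustrative, not the Alaska or Seward Peninsula estimates:

```python
import numpy as np

def contained_metal(n_trials=20_000, seed=11):
    # Combine a number-of-deposits distribution with lognormal grade and
    # tonnage models to produce a probability distribution of contained
    # metal (independent draws; the paper stresses grade-tonnage
    # dependencies, which would require a correlated sampler).
    rng = np.random.default_rng(seed)
    totals = np.empty(n_trials)
    for k in range(n_trials):
        n_dep = rng.poisson(3.0)                                   # deposits
        tonnage = rng.lognormal(mean=13.0, sigma=1.0, size=n_dep)  # tonnes
        grade = rng.lognormal(mean=-6.0, sigma=0.5, size=n_dep)    # fraction
        totals[k] = np.sum(tonnage * grade)                        # tonnes metal
    return totals
```

The empirical quantiles of `totals` then give the probability distribution of contained metal that feeds the economic analysis; trials with zero deposits produce the probability mass at zero.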
Exact calculations of phase and membrane equilibria for complex fluids by Monte Carlo simulation
Panagiotopoulos, A.Z.
1992-06-08
Objective is to develop molecular simulation techniques for phase equilibria in complex systems. The Gibbs ensemble Monte Carlo method was extended to obtain phase diagrams for highly asymmetric and ionic fluids. The modified Widom test particle technique was developed for chemical potentials of long polymeric molecules, and preliminary calculations of phase behavior of simple model homopolymers were performed.
Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study
ERIC Educational Resources Information Center
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick
2017-01-01
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
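The two ingredients of the RTcombiP approach can be sketched as follows. This is a minimal illustration, not the study's code: a randomization test over admissible intervention start points for a single AB case, and Edgington's additive method for combining the resulting p values across replicated cases (valid in the form below only when the p values sum to at most 1).

```python
import statistics
from math import factorial

def randomization_p(scores, actual_start, min_phase=3):
    """One AB case: statistic = mean(B) - mean(A); its randomization
    distribution comes from every admissible intervention start point
    (each phase keeps at least min_phase observations)."""
    n = len(scores)
    def stat(start):
        return statistics.mean(scores[start:]) - statistics.mean(scores[:start])
    starts = range(min_phase, n - min_phase + 1)
    null = [stat(s) for s in starts]
    observed = stat(actual_start)
    # one-sided p: proportion of admissible starts at least as extreme
    return sum(d >= observed for d in null) / len(null)

def additive_combined_p(pvals):
    """Edgington's additive method: for k independent p values with
    S = sum(pvals) <= 1, the combined p is S**k / k! (Irwin-Hall CDF)."""
    s, k = sum(pvals), len(pvals)
    if s > 1:
        raise ValueError("use the full Irwin-Hall formula when sum > 1")
    return s ** k / factorial(k)
```

For example, a clear level shift at the actual start point yields the smallest attainable p (1 over the number of admissible start points), and several such replicated p values combine to a much smaller overall p.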
PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries
Bartel, T.J.
1998-12-01
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
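The collision core of any DSMC code of this kind can be sketched for a single cell using Bird's no-time-counter (NTC) pair selection with hard-sphere scattering. This is a generic illustration, not Pegasus itself: the surface chemistry, vacuum pump model, and multi-species transport are omitted, and all physical numbers in the test are arbitrary.

```python
import math
import random

def dsmc_collide_cell(vels, n_real_per_sim, dt, cell_vol, d_ref, rng):
    """No-time-counter collision step for hard-sphere DSMC in one cell.
    vels: list of 3-component velocity lists, mutated in place."""
    N = len(vels)
    if N < 2:
        return 0
    sigma = math.pi * d_ref ** 2                 # hard-sphere cross section
    cr_max = 2000.0                              # conservative |rel. vel.| bound
    n_pairs = int(0.5 * N * (N - 1) * n_real_per_sim * sigma * cr_max * dt / cell_vol)
    collisions = 0
    for _ in range(n_pairs):
        i, j = rng.sample(range(N), 2)
        rel = [vels[i][k] - vels[j][k] for k in range(3)]
        cr = math.sqrt(sum(c * c for c in rel))
        if rng.random() < cr / cr_max:           # accept pair with prob cr/cr_max
            # isotropic post-collision relative velocity (hard sphere)
            cth = 1 - 2 * rng.random()
            sth = math.sqrt(1 - cth * cth)
            phi = 2 * math.pi * rng.random()
            new_rel = [cr * sth * math.cos(phi), cr * sth * math.sin(phi), cr * cth]
            cm = [(vels[i][k] + vels[j][k]) / 2 for k in range(3)]
            vels[i] = [cm[k] + new_rel[k] / 2 for k in range(3)]
            vels[j] = [cm[k] - new_rel[k] / 2 for k in range(3)]
            collisions += 1
    return collisions
```

Momentum and energy are conserved collision by collision, which is the basic sanity check for any DSMC collision routine.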
NASA Astrophysics Data System (ADS)
Cortes, Joaquin; Araya, Paulo
1991-11-01
Making use of Monte Carlo experiments, a simulation has been carried out of the adsorption of a gas on heterogeneous solids characterized by energy distribution and a random topography of the superficial sites. A good interpretation of the results is achieved by means of the theoretical models introduced by Hill, and later by Rudzinsky, for these types of systems.
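The heterogeneous-surface setup can be sketched as a lattice-gas grand canonical Monte Carlo simulation with a per-site energy distribution. This is a minimal illustration of the kind of experiment described, not the authors' code; the energies, chemical potential, and temperature are in arbitrary reduced units.

```python
import math
import random

def gcmc_isotherm(energies, mu, T, n_steps, rng):
    """Lattice-gas GCMC: site i binds with energy energies[i] (> 0 favours
    adsorption). Occupancy flips are accepted with the Metropolis rule for
    the grand-potential change dE = -(energies[i] + mu) on adsorption.
    Returns the equilibrium coverage (fraction of occupied sites)."""
    occ = [False] * len(energies)
    for _ in range(n_steps):
        i = rng.randrange(len(energies))
        dE = (energies[i] + mu) * (1 if occ[i] else -1)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            occ[i] = not occ[i]
    return sum(occ) / len(occ)
```

Drawing `energies` from a chosen distribution (uniform, Gaussian, ...) over randomly placed sites reproduces the energy heterogeneity and random topography studied above; the non-interacting limit recovers a site-averaged Langmuir isotherm.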
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-02-21
A stochastic model of the processes involved in the measurement of the activity of the (222)Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the (222)Rn decay products concentrations in the air are realistically evaluated.
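The propagation strategy described above can be sketched generically: sample each influencing factor from its distribution, push the samples through the measurement model, and read the uncertainty off the output distribution. The measurement model and all distribution parameters below are illustrative stand-ins, not the paper's values.

```python
import random
import statistics

def propagate_measurement(n=20000, seed=7):
    """Monte Carlo uncertainty propagation through a simplified
    activity-concentration model C = N / (eps * F * t)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        counts = rng.gauss(10000, 100)   # net counts (Poisson ~ normal here)
        eps = rng.gauss(0.25, 0.01)      # detection/collection efficiency
        flow = rng.gauss(1.2, 0.05)      # sampling flow rate, L/min
        t = rng.gauss(30.0, 0.1)         # sampling time, min
        results.append(counts / (eps * flow * t))
    return statistics.mean(results), statistics.stdev(results)
```

Unlike first-order (GUM-style) propagation, the Monte Carlo route keeps the full, possibly skewed, output distribution, which is the point made in the abstract.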
Teacher's Corner: Using SAS for Monte Carlo Simulation Research in SEM
ERIC Educational Resources Information Center
Fan, Xitao; Fan, Xiaotao
2005-01-01
This article illustrates the use of the SAS system for Monte Carlo simulation work in structural equation modeling (SEM). Data generation procedures for both multivariate normal and nonnormal conditions are discussed, and relevant SAS codes for implementing these procedures are presented. A hypothetical example is presented in which Monte Carlo…
Monte Carlo simulation of two-component aerosol processes
NASA Astrophysics Data System (ADS)
Huertas, Jose Ignacio
Aerosol processes have been extensively used for production of nanophase materials. However when temperatures and number densities are high, particle agglomeration is a serious drawback for these techniques. This problem can be addressed by encapsulating the particles with a second material before they agglomerate. These particles will agglomerate but the primary particles within them will not. When the encapsulation is later removed, the resulting powder will contain only weakly agglomerated particles. To demonstrate the applicability of the particle encapsulation method for the production of high purity unagglomerated nanosize materials, tungsten (W) and tungsten titanium alloy (W-Ti) particles were synthesized in a sodium/halide flame. The particles were characterized by XRD, SEM, TEM and EDAX. The particles appeared unagglomerated, cubic and hexagonal in shape, and had a size of 30-50 nm. No contamination was detected even after extended exposure to atmospheric conditions. The nanosized W and W-Ti particles were consolidated into pellets of 6 mm diameter and 6-8 mm long. Hardness measurements indicate values 4 times that of conventional tungsten. 100% densification was achieved by hipping the samples. To study the particle encapsulation method, a code to simulate particle formation in two component aerosols was developed. The simulation was carried out using a Monte Carlo technique. This approach allowed for the treatment of both probabilistic and deterministic events. Thus, the coagulation term of the general dynamic equation (GDE) was Monte Carlo simulated, and the condensation term was solved analytically and incorporated into the model. The model includes condensation, coagulation, sources, and sinks for two-component aerosol processes. The Kelvin effect has been included in the model as well. The code is general and does not suffer from problems associated with mass conservation, high rates of condensation and approximations on particle composition. It has
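The Monte Carlo treatment of the coagulation term can be sketched with a constant collision kernel (a simplification; realistic kernels depend on particle size). The point of the sketch is the bookkeeping that the abstract emphasizes: in a two-component aerosol, merging particles as (mass_A, mass_B) pairs tracks the composition of every agglomerate exactly and conserves mass by construction.

```python
import random

def coagulate(particles, n_events, rng):
    """Constant-kernel Monte Carlo coagulation for a two-component aerosol.
    particles: list of (mass_A, mass_B) tuples, mutated in place."""
    for _ in range(n_events):
        if len(particles) < 2:
            break
        i, j = rng.sample(range(len(particles)), 2)
        a, b = particles[i], particles[j]
        merged = (a[0] + b[0], a[1] + b[1])
        # remove the higher index first so the lower one stays valid
        for k in sorted((i, j), reverse=True):
            particles.pop(k)
        particles.append(merged)
    return particles
```

Condensation, sources, and sinks would update the masses between coagulation events, which is where the analytic condensation solution mentioned above plugs in.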
Monte Carlo Simulation of Aqueous Dilute Solutions of Polyhydric Alcohols
NASA Astrophysics Data System (ADS)
Lilly, Arnys Clifton, Jr.
In order to investigate the details of hydrogen bonding and solution molecular conformation of complex alcohols in water, isobaric-isothermal Monte Carlo simulations were carried out on several systems. The solutes investigated were ethanol, ethylene glycol, 1,2-propylene glycol, 1,3-propylene glycol and glycerol. In addition, propane, which does not hydrogen bond but does form water hydrates, was simulated in aqueous solution. The complex alcohol-water systems are very nonideal in their behavior as a function of solute concentration down to very dilute solutions. The water model employed was TIP4P water [1] and the intermolecular potentials employed are of the Jorgensen type [2], in which the interactions between the molecules are represented by interaction sites usually located on nuclei. The interactions are represented by a sum of Coulomb and Lennard-Jones terms between all intermolecular pairs of sites. Intramolecular rotations in the solute are modeled by torsional potential energy functions taken from ethanol, 1-propanol and 2-propanol for C-O and C-C bond rotations. Quasi-component pair correlation functions were used to analyze the hydrogen bonding. Hydrogen bonds were classified as proton acceptor and proton donor bonds by analyzing the nearest neighbor pair correlation function between hydroxyl oxygen and hydrogen and between solvent-water hydrogen and oxygen. The results obtained for partial molar heats of solution are more negative than experimental values by 3.0 to 14 kcal/mol. In solution, all solutes reached a contracted molecular geometry with the OH groups generally on one side of the molecule. There is a tendency for the solute OH groups to hydrogen bond with water, with more proton acceptor bonds than proton donor bonds. The water-solute binding energies correlate with experimental measurements of the water-binding properties of the solute. [1] Jorgensen, W. L. et al., J. Chem. Phys., 79, 926 (1983). [2] Jorgensen, W. L., J. Phys. Chem., 87, 5304
Monte Carlo simulation of the neutron monitor yield function
NASA Astrophysics Data System (ADS)
Mangeard, P.-S.; Ruffolo, D.; Sáiz, A.; Madlee, S.; Nutaro, T.
2016-08-01
Neutron monitors (NMs) are ground-based detectors that measure variations of the Galactic cosmic ray flux at GV range rigidities. Differences in configuration, electronics, surroundings, and location induce systematic effects on the calculation of the yield functions of NMs worldwide. Different estimates of NM yield functions can differ by a factor of 2 or more. In this work, we present new Monte Carlo simulations to calculate NM yield functions and perform an absolute (not relative) comparison with the count rate of the Princess Sirindhorn Neutron Monitor (PSNM) at Doi Inthanon, Thailand, both for the entire monitor and for individual counter tubes. We model the atmosphere using profiles from the Global Data Assimilation System database and the Naval Research Laboratory Mass Spectrometer, Incoherent Scatter Radar Extended model. Using FLUKA software and the detailed geometry of PSNM, we calculated the PSNM yield functions for protons and alpha particles. An agreement better than 9% was achieved between the PSNM observations and the simulated count rate during the solar minimum of December 2009. The systematic effect from the electronic dead time was studied as a function of primary cosmic ray rigidity at the top of the atmosphere up to 1 TV. We show that the effect is not negligible and can reach 35% at high rigidity for a dead time >1 ms. We analyzed the response function of each counter tube at PSNM using its actual dead time, and we provide normalization coefficients between count rates for various tube configurations in the standard NM64 design that are valid to within ˜1% for such stations worldwide.
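The dead-time effect studied above can be reproduced in a few lines: Poisson arrivals filtered by a non-paralyzable dead time give the classic suppression m = n/(1 + n·τ) of the recorded rate. The rates and dead time below are illustrative, not PSNM values, and a non-paralyzable counter is assumed.

```python
import random

def measured_rate(true_rate, dead_time, duration, seed=3):
    """Simulate a non-paralyzable counter: events arriving within dead_time
    of the last *recorded* event are lost. Returns the recorded count rate."""
    rng = random.Random(seed)
    t, last_recorded, n_rec = 0.0, -1e30, 0
    while True:
        t += rng.expovariate(true_rate)      # Poisson inter-arrival times
        if t > duration:
            break
        if t - last_recorded >= dead_time:
            n_rec += 1
            last_recorded = t
    return n_rec / duration
```

With a 1 ms dead time and a 1 kHz true rate, half of the counts are lost, which is the order of magnitude of the high-rigidity effect quoted in the abstract.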
Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Monte Carlo simulation of the ELIMED beamline using Geant4
NASA Astrophysics Data System (ADS)
Pipek, J.; Romano, F.; Milluzzo, G.; Cirrone, G. A. P.; Cuttone, G.; Amico, A. G.; Margarone, D.; Larosa, G.; Leanza, R.; Petringa, G.; Schillaci, F.; Scuderi, V.
2017-03-01
In this paper, we present a Geant4-based Monte Carlo application for ELIMED beamline [1-6] simulation, including its features and several preliminary results. We have developed the application to aid the design of the beamline, to estimate various beam characteristics, and to assess the amount of secondary radiation. In future, an enhanced version of this application will support the beamline users when preparing their experiments.
Monte Carlo-based simulation of dynamic jaws tomotherapy
Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S.
2011-09-15
Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the ''cheese'' phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed (''running start stop,'' RSS) and symmetric jaws-variable couch speed (''symmetric running start stop,'' SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis
Monte Carlo simulation of particle acceleration at astrophysical shocks
NASA Technical Reports Server (NTRS)
Campbell, Roy K.
1989-01-01
A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic ray spectra derived from Monte Carlo simulations show the same power-law behavior as the spectra derived from analytic calculations based on a diffusion equation. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool; this has been the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the details of thermal particle pick-up and hence on the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better
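The essential mechanism, repeated shock crossings with a fractional energy gain and a per-cycle escape probability, reduces to a toy model that already yields the power law: the integral spectrum follows N(>E) ∝ E^s with s = ln(1 - P_esc)/ln(1 + ΔE/E). The gain and escape probability below are illustrative, not derived from any particular shock.

```python
import random

def accelerate(n_particles=20000, gain=0.1, p_escape=0.2, e0=1.0, seed=5):
    """First-order Fermi toy model: each crossing cycle multiplies the
    particle energy by (1 + gain); after each cycle the particle escapes
    downstream with probability p_escape. Returns the final energies."""
    rng = random.Random(seed)
    energies = []
    for _ in range(n_particles):
        e = e0
        while rng.random() > p_escape:       # survive another cycle
            e *= 1.0 + gain
        energies.append(e)
    return energies
```

With gain = 0.1 and p_escape = 0.2 the predicted integral slope is ln(0.8)/ln(1.1) ≈ -2.34, in the range observed for galactic cosmic rays.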
Monte Carlo Simulations of the Response of the MARIE Instrument
NASA Technical Reports Server (NTRS)
Andersen, V.; Lee, K.; Pinsky, L.; Atwell, W.; Cleghorn, T.; Cucinotta, F.; Saganti, P.; Turner, R.; Zeitlin, C.
2003-01-01
The MARIE instrument aboard Mars Odyssey functions as a telescope for the detection of charged, energetic nuclei. The directionality that leads to the telescope description is achieved by requiring coincident signals in two designated detectors in MARIE's silicon detector stack for the instrument to trigger. Because of this, MARIE is actually a bidirectional telescope. Triggering particles can enter the detector stack by passing through the lightly shielded front of the instrument, but can also enter the back of the instrument by passing through the bulk of Odyssey. Understanding how to relate the signals recorded by MARIE to astrophysically important quantities, such as particle fluxes or spectra exterior to the spacecraft, therefore requires detailed modeling of the physical interactions that occur as the particles pass through the spacecraft and the instrument itself. In order to facilitate the calibration of the MARIE data, we have begun a program to simulate the response of MARIE using the FLUKA [1] [2] Monte Carlo radiation transport code.
Parallel Performance Optimization of the Direct Simulation Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas
2009-11-01
Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.
Monte Carlo Simulations of the Inside Intron Recombination
NASA Astrophysics Data System (ADS)
Cebrat, Stanisław; Pękalski, Andrzej; Scharf, Fabian
Biological genomes are divided into coding and non-coding regions. Introns are non-coding parts within genes, while the remaining non-coding parts are intergenic sequences. To study the evolutionary significance of inside-intron recombination we have used two models based on the Monte Carlo method. In our computer simulations we have implemented the internal structure of genes by declaring the probability of recombination between exons. One situation in which inside-intron recombination is advantageous is the recovery of functional genes by combining proper exons dispersed in the genetic pool of the population after a long period without selection for the function of the gene; populations then have to pass through a bottleneck. These events are rather rare, and we expected that there should be other phenomena yielding benefits from inside-intron recombination. In fact, we have found that inside-intron recombination is advantageous only in the case when, after recombination, besides the recombinant forms, the parental haplotypes are available and selection already acts on gametes.
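The declared per-boundary recombination probability can be sketched as a crossover operator that acts only at exon boundaries (i.e., inside introns). This is a toy illustration, with exons coded 1 = functional and 0 = defective: two defective haplotypes carrying complementary good exons can recombine into one fully functional gene, which is the recovery scenario described above.

```python
import random

def recombine(gene_a, gene_b, p_intron, rng):
    """Cross over two genes (tuples of exons) only at exon boundaries,
    each boundary independently with probability p_intron."""
    child_a, child_b = [], []
    swapped = False
    for ea, eb in zip(gene_a, gene_b):
        child_a.append(eb if swapped else ea)
        child_b.append(ea if swapped else eb)
        if rng.random() < p_intron:          # crossover in the next intron
            swapped = not swapped
    return tuple(child_a), tuple(child_b)
```

With `p_intron = 0` the haplotypes pass through unchanged; raising `p_intron` increases the chance of reassembling a complete set of functional exons from a mixed gene pool.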
Pan, Tianshu; Rasmussen, John C; Lee, Jae Hoon; Sevick-Muraca, Eva M
2007-04-01
Recently, we have presented and experimentally validated a unique numerical solver of the coupled radiative transfer equations (RTEs) for rapidly computing time-dependent excitation and fluorescent light propagation in small animal tomography. Herein, we present a time-dependent Monte Carlo algorithm to validate the forward RTE solver and investigate the impact of physical parameters upon transport-limited measurements in order to best direct the development of the RTE solver for optical tomography. Experimentally, the Monte Carlo simulations for both transport-limited and diffusion-limited propagations are validated using frequency domain photon migration measurements for 1.0%, 0.5%, and 0.2% intralipid solutions containing 1 microM indocyanine green in a 49 cm3 cylindrical phantom corresponding to the small volume employed in small animal tomography. The comparisons between Monte Carlo simulations and the numerical solutions result in mean percent error in amplitude and the phase shift less than 5.0% and 0.7 degrees, respectively, at excitation and emission wavelengths for varying anisotropic factors, lifetimes, and modulation frequencies. Monte Carlo simulations indicate that the accuracy of the forward model is enhanced using (i) suitable source models of photon delivery, (ii) accurate anisotropic factors, and (iii) accurate acceptance angles of collected photons. Monte Carlo simulations also show that the accuracy of the diffusion approximation in the small phantom depends upon (i) the ratio d(phantom)/l(tr), where d(phantom) is the phantom diameter and l(tr) is the transport mean free path; and (ii) the anisotropic factor of the medium. The Monte Carlo simulations validates and guides the future development of an appropriate RTE solver for deployment in small animal optical tomography.
Monte Carlo modeling of the resurfacing of Venus
NASA Technical Reports Server (NTRS)
Bullock, M. A.; Grinspoon, David H.; Head, James W., III
1992-01-01
We have developed a three-dimensional model of venusian resurfacing that employs Monte Carlo simulations of both impact cratering and volcanism. The model simulates the production of craters on Venus by using the observed mass distributions of Earth- and Venus-crossing asteroids and comets. Lava flows are modeled by an energy minimization technique to simulate the effects of local topography on the shape and extent of flows. The model is run under a wide range of assumptions regarding the scale and time evolution of volcanism on Venus. Regions of the parameter space that result in impact crater distributions and modifications that are currently observed will be explored to place limits on the possible volcanic resurfacing history of Venus.
More about Zener drag studies with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz
2013-03-01
Grain growth (GG) processes in the presence of second-phase and stationary particles have been widely studied but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with second phase stationary particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.
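A minimal 2D Potts-model step with immobile second-phase particles illustrates the simulation technique named above (a generic sketch, not the authors' code; lattice size and temperature in the test are arbitrary). Pinned sites carry no grain orientation, which is what drags boundaries and produces the Zener effect.

```python
import math
import random

def potts_step(spins, particles, L, T, rng):
    """One Monte Carlo attempt of Potts-model grain growth on an L x L
    periodic lattice. spins maps (i, j) -> grain id; sites in `particles`
    hold spin None and never change state (Zener pinning)."""
    i, j = rng.randrange(L), rng.randrange(L)
    if (i, j) in particles:
        return False                          # particles are immobile
    nbrs = [((i + di) % L, (j + dj) % L)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    new = spins[rng.choice(nbrs)]             # propose a neighbour's orientation
    old = spins[(i, j)]
    if new is None or new == old:
        return False
    def boundary_energy(s):                   # unlike, non-particle neighbours
        return sum(1 for n in nbrs if spins[n] is not None and spins[n] != s)
    dE = boundary_energy(new) - boundary_energy(old)
    if dE <= 0 or (T > 0 and rng.random() < math.exp(-dE / T)):
        spins[(i, j)] = new
        return True
    return False
```

Sweeping this step and tracking the mean grain area versus particle concentration is how the Zener limiting grain size is measured in such simulations.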
Dendritic growth shapes in kinetic Monte Carlo models
NASA Astrophysics Data System (ADS)
Krumwiede, Tim R.; Schulze, Tim P.
2017-02-01
For the most part, the study of dendritic crystal growth has focused on continuum models featuring surface energies that yield six pointed dendrites. In such models, the growth shape is a function of the surface energy anisotropy, and recent work has shown that considering a broader class of anisotropies yields a correspondingly richer set of growth morphologies. Motivated by this work, we generalize nanoscale models of dendritic growth based on kinetic Monte Carlo simulation. In particular, we examine the effects of extending the truncation radius for atomic interactions in a bond-counting model. This is done by calculating the model’s corresponding surface energy and equilibrium shape, as well as by running KMC simulations to obtain nanodendritic growth shapes. Additionally, we compare the effects of extending the interaction radius in bond-counting models to that of extending the number of terms retained in the cubic harmonic expansion of surface energy anisotropy in the context of continuum models.
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders.
Viveros-Méndez, P X; Gil-Villegas, Alejandro; Aranda-Espinoza, S
2014-07-28
In this article we present a NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e²/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration value. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with an orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as the tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
NASA Astrophysics Data System (ADS)
Lucci, Luca; Palestri, Pierpaolo; Esseni, David; Selmi, Luca
2005-09-01
In this paper, we present simulations of some of the most relevant transport properties of the inversion layer of ultra-thin film SOI devices with a self-consistent Monte-Carlo transport code for a confined electron gas. We show that size-induced quantization not only decreases the low-field mobility (as experimentally found in [Uchida K, Koga J, Ohba R, Numata T, Takagi S. Experimental evidences of quantum-mechanical effects on low-field mobility, gate-channel capacitance and threshold voltage of ultrathin body SOI MOSFETs, IEEE IEDM Tech Dig 2001;633-6; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. Low field electron and hole mobility of SOI transistors fabricated on ultra-thin silicon films for deep sub-micron technology application. IEEE Trans Electron Dev 2001;48(12):2842-50; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E, An experimental study of mobility enhancement in ultra-thin SOI transistors operated in double-gate mode, IEEE Trans Electron Dev 2003;50(3):802-8. [1-3
NASA Astrophysics Data System (ADS)
Belinato, W.; Santos, W. S.; Paschoal, C. M. M.; Souza, D. N.
2015-06-01
The combination of positron emission tomography (PET) and computed tomography (CT) has been extensively used in oncology for diagnosis and staging of tumors, radiotherapy planning and follow-up of patients with cancer, as well as in cardiology and neurology. This study determines by the Monte Carlo method the internal organ dose deposition for computational phantoms created by multidetector CT (MDCT) beams of two PET/CT devices operating with different parameters. The different MDCT beam parameters were largely related to the total filtration that provides a beam energetic change inside the gantry. This parameter was determined experimentally with the Accu-Gold Radcal measurement system. The experimental values of the total filtration were included in the simulations of two MCNPX code scenarios. The absorbed organ doses obtained in MASH and FASH phantoms indicate that bowtie filter geometry and the energy of the X-ray beam have significant influence on the results, although this influence can be compensated by adjusting other variables such as the tube current-time product (mAs) and pitch during PET/CT procedures.
NASA Astrophysics Data System (ADS)
Edison, John R.; Monson, Peter A.
2013-06-01
This article addresses the accuracy of a dynamic mean field theory (DMFT) for fluids in porous materials [P. A. Monson, J. Chem. Phys. 128, 084701 (2008); doi:10.1063/1.2837287]. The theory is used to study the relaxation processes of fluids in pores driven by step changes made to a bulk reservoir in contact with the pore. We compare the results of the DMFT to those obtained by averaging over large numbers of dynamic Monte Carlo (DMC) simulation trajectories. The problem chosen for comparison is capillary condensation in slit pores, driven by step changes in the chemical potential in the bulk reservoir and involving a nucleation process via the formation of a liquid bridge. The principal difference between the DMFT results and DMC is the replacement of a distribution of nucleation times and location along the pore for the formation of liquid bridges by a single time and location. DMFT is seen to yield an otherwise qualitatively accurate description of the dynamic behavior.
NASA Astrophysics Data System (ADS)
Erwin, Daniel A.; Pham-van-Diep, Gerald C.; Muntz, E. Phillip
1991-04-01
One-dimensional shock wave properties in helium and argon are predicted using Monte Carlo direct simulation. The collision model is based directly on the interatomic potential, taking angular scattering into account. The potential is assumed to be of the Maitland-Smith n(r)-6 form. The detailed validity of the simulation is studied by comparing the predicted macroscopic and microscopic flow properties in shock waves to a wide range of available data.
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
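The core of the FET is easy to sketch: each sampled point contributes P_n(x) to the n-th coefficient estimator, alongside the usual histogram tally. A minimal illustration in Python (not taken from the project's codes; the uniform source and the Legendre normalization convention are assumptions for this demo):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def fet_tally(samples, order):
    """Estimate Legendre expansion coefficients of a density on [-1, 1].

    Each sample contributes P_n(x) to the n-th coefficient estimator, so the
    coefficients accumulate during the same random walk that would fill a
    conventional histogram tally."""
    coeffs = np.empty(order + 1)
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0  # selects the single polynomial P_n
        # Orthogonality on [-1, 1] gives c_n = (2n + 1)/2 * E[P_n(x)].
        coeffs[n] = (2 * n + 1) / 2.0 * legendre.legval(samples, basis).mean()
    return coeffs

# Uniform source on [-1, 1]: exact density is 1/2, so c_0 = 0.5 and the
# higher coefficients should vanish to within statistics.
x = rng.uniform(-1.0, 1.0, 200_000)
c = fet_tally(x, order=4)
```

Note that c[0] is exactly the flat mode mentioned in the abstract: for P_0 = 1 the estimator reduces to the total (normalized) tally.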
Monte Carlo simulations of single and coupled synthetic molecular motors
NASA Astrophysics Data System (ADS)
Chen, C.-M.; Zuckermann, M.
2012-11-01
We use a minimal model to study the processive motion of coupled synthetic molecular motors along a DNA track, and we present data from Monte Carlo (MC) computer simulations based on this model. The model was originally proposed by Bromley [HFSP J. 3, 204 (2009), 10.2976/1.3111282] for studying the properties of a synthetic protein motor, the “Tumbleweed” (TW), and involves rigid Y-shaped motors diffusively rotating along the track while controlled by a series of ligand pulses periodically injected into the solution. The advantage of the model is that it mimics the mechanical properties of the TW motor in detail. Both the average first passage time, which measures the diffusive motion of the motors, and the average dwell time on the track, which measures their processivity, are investigated by varying the parameters of the model, including the ligand concentration and the range and strength of the binding interaction between motors and track. The dependence of these dynamic time scales on the ligand concentration is of particular experimental interest. We first studied single rigid TW motors, for which no previous MC simulations had been performed, and found a logarithmic decrease of the average first passage time and a logarithmic increase of the average dwell time with increasing ligand concentration. For two coupled motors, the dependence on ligand concentration is still logarithmic for the average first passage time but becomes linear for the average dwell time, which suggests a much greater stability of the processive motion of coupled motors as compared to single motors in the limit of large ligand concentration. By increasing the number of coupled motors, m, we found that the average first passage time of the coupled motors increases only slowly with m while the average dwell time increases exponentially with m. The stability of coupled motors on the track can thus be greatly enhanced by coupling additional motors.
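The average first passage time in such models is estimated with a very small amount of Monte Carlo code. The sketch below is not the Tumbleweed model itself: it replaces the ligand-gated Y-shaped motor with a plain unbiased random walker (an assumption for the demo), but it shows how the first-passage statistic is tallied:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_first_passage(L, n_walkers):
    """Average number of unbiased diffusive steps for a walker starting at 0
    to first reach +/- L; a stand-in for one rotational step of a track-bound
    motor (the real model gates each step with ligand binding)."""
    times = np.empty(n_walkers)
    for i in range(n_walkers):
        x, t = 0, 0
        while abs(x) < L:
            x += 1 if rng.random() < 0.5 else -1
            t += 1
        times[i] = t
    return times.mean()

# For a symmetric walk absorbed at +/- L, the exact mean is L**2 steps,
# which gives a quick sanity check on the tally.
t_avg = mean_first_passage(L=8, n_walkers=5000)
```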
Automatic determination of primary electron beam parameters in Monte Carlo simulation
Pena, Javier; Gonzalez-Castano, Diego M.; Gomez, Faustino; Sanchez-Doblado, Francisco; Hartmann, Guenther H.
2007-03-15
In order to obtain realistic and reliable Monte Carlo simulations of medical linac photon beams, an accurate determination of the parameters that define the primary electron beam hitting the target is a fundamental step. In this work we propose a new methodology for commissioning photon beams in Monte Carlo simulations that ensures the reproducibility of a wide range of clinically useful fields. For this purpose, accelerated Monte Carlo simulations of 2×2, 10×10, and 20×20 cm² fields at SSD = 100 cm are carried out for several combinations of the primary electron beam mean energy and radial FWHM. Then, by performing a simultaneous comparison with the corresponding measurements for these same fields, the best combination is selected. This methodology has been employed to determine the characteristics of the primary electron beams that best reproduce a Siemens PRIMUS and a Varian 2100 CD machine in the Monte Carlo simulations. Excellent agreement was obtained between simulations and measurements for a wide range of field sizes. Because precalculated profiles are stored in databases, the whole commissioning process can be fully automated, avoiding manual fine-tuning. These databases can also be used to characterize any accelerator of the same model at other sites.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
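The Map/Reduce split described here is easy to reproduce outside Hadoop. The following sketch uses Python's built-in map and functools.reduce in place of the framework, and a deliberately simplified 1D photon model rather than MC321 (all physics parameters are illustrative assumptions); it shows the structure (independent map tasks scoring absorption, a reduce task merging tallies), not the original code:

```python
import functools
import math
import random

def map_photons(seed, n_photons, mu_a=0.1, mu_s=0.9):
    """Map task: trace n_photons through a homogeneous 1D medium and return a
    partial absorption tally {depth bin: absorbed weight}. Simplified physics:
    exponential free paths, weight deposition at each interaction, and crude
    direction flips in place of 3D scattering."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    tally = {}
    for _ in range(n_photons):
        z, w, direction = 0.0, 1.0, 1.0
        while w > 1e-4:
            z += direction * -math.log(1.0 - rng.random()) / mu_t
            if z < 0.0:
                break  # photon escaped back through the surface
            absorbed = w * mu_a / mu_t
            bin_ = int(z)  # 1-unit-thick depth bins
            tally[bin_] = tally.get(bin_, 0.0) + absorbed
            w -= absorbed
            direction = rng.choice((-1.0, 1.0))
    return tally

def reduce_tallies(a, b):
    """Reduce task: merge two partial absorption tallies."""
    for k, v in b.items():
        a[k] = a.get(k, 0.0) + v
    return a

# Four independent "map tasks", each with its own seed, then one reduce.
partials = map(lambda s: map_photons(seed=s, n_photons=2000), range(4))
total = functools.reduce(reduce_tallies, partials, {})
```

Because each map task carries its own seed and emits a self-contained tally, losing a task means rerunning only that task, which is the fault-tolerance property the abstract exploits.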
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques can generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the limited time available to engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently: they can search large data sets for specific patterns and highlight critical variables so analysts can focus their efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
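The sequential-feature-selection-plus-kNN idea can be sketched compactly. The code below is a hypothetical reconstruction on synthetic data, not the tool described in the abstract: it greedily adds whichever dispersed parameter most improves leave-one-out kNN accuracy at separating failed from nominal Monte Carlo runs:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(X, y, k=5):
    """Leave-one-out accuracy of a k-nearest-neighbour classifier."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a run may not vote for itself
    idx = np.argsort(d, axis=1)[:, :k]
    votes = y[idx].mean(axis=1) > 0.5
    return (votes == y).mean()

def forward_select(X, y, n_keep):
    """Greedy sequential feature selection: rank dispersions by how much
    they help the kNN classifier separate failed from nominal runs."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < n_keep:
        scores = [(knn_accuracy(X[:, chosen + [j]], y), j) for j in remaining]
        _, best_j = max(scores)
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

# Synthetic Monte Carlo runs: 5 dispersed parameters; "failure" (y = 1) is
# driven by a combination of parameters 0 and 3, the rest are noise.
X = rng.uniform(size=(300, 5))
y = ((X[:, 0] > 0.7) & (X[:, 3] > 0.5)).astype(int)
ranked = forward_select(X, y, n_keep=2)
```

The ranking highlights parameter *combinations*, not just single parameters, which is the failure-region behavior the abstract emphasizes.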
Monte Carlo simulations of electron transport in strongly attaching gases
NASA Astrophysics Data System (ADS)
Petrovic, Zoran; Miric, Jasmina; Simonovic, Ilija; Bosnjakovic, Danko; Dujko, Sasa
2016-09-01
Extensive loss of electrons in strongly attaching gases imposes significant difficulties in Monte Carlo simulations at low electric field strengths. In order to compensate for such losses, some kind of rescaling procedures must be used. In this work, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The discrete rescaling procedure is based on duplication of electrons randomly chosen from the remaining swarm at certain discrete time steps. The continuous rescaling procedure employs a dynamically defined fictitious ionization process with the constant collision frequency chosen to be equal to the attachment collision frequency. These procedures should not in any way modify the distribution function. Monte Carlo calculations of transport coefficients for electrons in SF6 and CF3I are performed in a wide range of electric field strengths. However, special emphasis is placed upon the analysis of transport phenomena in the limit of lower electric fields where the transport properties are strongly affected by electron attachment. Two important phenomena arise: (1) the reduction of the mean energy with increasing E/N for electrons in SF6, and (2) the occurrence of negative differential conductivity in the bulk drift velocity of electrons in both SF6 and CF3I.
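The discrete rescaling procedure can be illustrated in a few lines. The sketch below tracks only a single stand-in phase-space variable per electron and an arbitrary attachment probability (both assumptions for the demo); the point is the duplication step and the weight bookkeeping that keeps tallies unbiased:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_swarm(n0=1000, p_attach=0.05, n_steps=200):
    """Discrete rescaling for a strongly attaching gas: whenever attachment
    has halved the swarm, duplicate randomly chosen survivors back up to n0
    and reduce the common statistical weight so that (weight * population)
    still estimates the true surviving electron number."""
    energies = rng.exponential(1.0, n0)  # stand-in for the full phase space
    weight = 1.0
    for _ in range(n_steps):
        survivors = energies[rng.random(energies.size) > p_attach]
        if survivors.size == 0:
            raise RuntimeError("swarm died out before rescaling")
        if survivors.size < n0 // 2:
            weight *= survivors.size / n0  # weight compensates the duplication
            extra = rng.choice(survivors, size=n0 - survivors.size)
            survivors = np.concatenate([survivors, extra])
        energies = survivors
    return energies, weight

swarm, weight = simulate_swarm()
# weight * swarm.size should track n0 * (1 - p_attach)**n_steps on average.
```

Real simulations duplicate full phase-space states (position, velocity, internal state), and the continuous-rescaling alternative replaces the discrete duplication with a fictitious ionization process, as the abstract describes.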
Monte Carlo studies of model Langmuir monolayers.
Opps, S B; Yang, B; Gray, C G; Sullivan, D E
2001-04-01
This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters σhh and σtt, respectively. The tails consist of nt ≈ 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques incorporating a system of dynamic filters allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with σhh=σtt, we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'2/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in Tc with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and the differences in Tc due to head-tail diameter mismatch diminished. In most of the present research, the water was treated as a hard surface.
Accelerating particle-in-cell simulations using multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2015-11-01
Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
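The MLMC telescoping estimator itself is compact. As a hedged illustration in the SDE setting mentioned above: the sketch below uses an Euler-Maruyama discretization of geometric Brownian motion, chosen because E[S_T] is known in closed form, not because it appears in this work:

```python
import math

import numpy as np

rng = np.random.default_rng(3)

def level_estimator(level, n_paths, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """One MLMC level: fine and coarse Euler-Maruyama paths driven by the
    *same* Brownian increments, so Var(fine - coarse) shrinks with the step
    and the correction levels need few samples."""
    nf = 2 ** level
    dt = T / nf
    dw = rng.normal(0.0, math.sqrt(dt), size=(n_paths, nf))
    fine = np.full(n_paths, s0)
    for i in range(nf):
        fine = fine * (1.0 + mu * dt + sigma * dw[:, i])
    if level == 0:
        return fine.mean()
    coarse = np.full(n_paths, s0)
    for i in range(nf // 2):
        dwc = dw[:, 2 * i] + dw[:, 2 * i + 1]  # coarse increment = sum of fine
        coarse = coarse * (1.0 + mu * 2.0 * dt + sigma * dwc)
    return (fine - coarse).mean()

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
estimate = sum(level_estimator(l, n_paths=40_000) for l in range(6))
# Exact value for geometric Brownian motion: E[S_T] = s0 * exp(mu * T).
```

The generalization discussed in the abstract is to make the level hierarchy consistent with self-consistent field evolution, which this standalone SDE sketch does not attempt.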
The proton therapy nozzles at Samsung Medical Center: A Monte Carlo simulation study using TOPAS
NASA Astrophysics Data System (ADS)
Chung, Kwangzoo; Kim, Jinsung; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-07-01
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles by using TOol for PArticle Simulation (TOPAS). At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, the novel features of TOPAS, such as the time feature and the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation have been defined. Dosimetric properties, like the percent depth dose curve, the spread-out Bragg peak (SOBP), and the beam spot size, have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. A radiotherapy (RT) plan exported from the TPS is interpreted by the interface and translated into TOPAS input text. The developed Monte Carlo nozzle model can be used to estimate non-beam performance, such as the neutron background of the nozzles. Furthermore, the nozzle model can be used to study the mechanical optimization of the nozzle design.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual anatomical variability restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 × 0.8 × 0.8 mm³, for three different pairs of T1/T2 values each. The MRI data were used to define the light absorption and dispersion coefficients of the material in each voxel, and the resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons that was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.
Numerical Demonstration of Source Convergence Issues in Monte Carlo Eigenvalue Simulations
Petrovic, Bojan
2001-06-17
Monte Carlo is potentially the most accurate method for modeling particle transport since it allows detailed geometry representation and use of pointwise cross sections. Its statistical nature, however, introduces additional concerns related to the reliability of the results. This is a special concern in eigenvalue Monte Carlo simulations because the possibility of significantly underestimating the eigenvalue is real. A series of test problems is introduced and utilized to demonstrate and clarify some of the source convergence issues. Analysis of the results is intended to help formulate improved diagnostic methods for identifying or preventing false source convergence.
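The slow-convergence mechanism can be demonstrated without a transport code: a fission-matrix power iteration with a dominance ratio near one shows how a badly initialized source shape relaxes only slowly even while the eigenvalue estimate looks plausible. The 2×2 matrix below is an illustrative stand-in, not one of the paper's test problems:

```python
import numpy as np

# Two weakly coupled fissile regions: a 2x2 fission matrix whose dominance
# ratio (second-to-first eigenvalue ratio) is close to one -- the regime in
# which Monte Carlo eigenvalue runs converge their source shape very slowly.
F = np.array([[1.00, 0.02],
              [0.02, 0.98]])

def power_iteration(F, n_iter):
    source = np.array([1.0, 0.0])  # badly initialized: all source in region 0
    k_history = []
    for _ in range(n_iter):
        source = F @ source
        k = source.sum()           # eigenvalue estimate for a sum-normalized source
        source /= k
        k_history.append(k)
    return source, k_history

source, ks = power_iteration(F, 500)
evals = np.linalg.eigvalsh(F)             # ascending; F is symmetric here
dominance_ratio = evals[-2] / evals[-1]
```

In an actual Monte Carlo run each iterate is also noisy, so the early, still-biased generations can easily be mistaken for a converged source, which is the false-convergence hazard the abstract targets.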
A Grand Canonical Monte Carlo-Brownian dynamics algorithm for simulating ion channels.
Im, W; Seefeld, S; Roux, B
2000-01-01
A computational algorithm based on Grand Canonical Monte Carlo (GCMC) and Brownian Dynamics (BD) is described to simulate the movement of ions in membrane channels. The proposed algorithm, GCMC/BD, allows the simulation of ion channels with a realistic implementation of boundary conditions of concentration and transmembrane potential. The method is consistent with a statistical mechanical formulation of the equilibrium properties of ion channels (Biophys. J. 77:139-153). The GCMC/BD algorithm is illustrated with simulations of simple test systems and of the OmpF porin of Escherichia coli. The approach provides a framework for simulating ion permeation in the context of detailed microscopic models.
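The GCMC half of such an algorithm reduces, for non-interacting particles, to a pair of acceptance rules. The sketch below (ideal gas, ΔU = 0, arbitrary activity and volume chosen for the demo) shows the insertion/deletion moves; the real GCMC/BD method adds the Brownian-dynamics propagation and the interaction energies:

```python
import random

rng = random.Random(5)

def gcmc_ideal_gas(activity, volume, n_moves=200_000):
    """Grand canonical MC for an ideal gas (no interactions, so delta-U = 0):
    insertion accepted with min(1, z*V/(N+1)), deletion with min(1, N/(z*V)).
    Detailed balance then gives a Poisson N-distribution with mean z*V."""
    n = 0
    total = samples = 0
    for step in range(n_moves):
        if rng.random() < 0.5:      # attempt an insertion
            if rng.random() < min(1.0, activity * volume / (n + 1)):
                n += 1
        elif n > 0:                 # attempt a deletion
            if rng.random() < min(1.0, n / (activity * volume)):
                n -= 1
        if step > n_moves // 10:    # discard a burn-in period
            total += n
            samples += 1
    return total / samples

mean_n = gcmc_ideal_gas(activity=0.5, volume=40.0)  # expect <N> close to 20
```

In a channel simulation these moves are applied only in buffer regions at the two ends of the system, which is how the algorithm imposes fixed concentrations on either side of the membrane.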
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples, including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.
Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations
NASA Astrophysics Data System (ADS)
Malladi, Mayank
Nanocomposites are composite materials which incorporate nanosized particles, platelets, or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, accounting for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in numerical solutions of the BTE using conventional methods such as the finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites is simulated.
[Monte Carlo simulation of the divergent beam propagation in a semi-infinite bio-tissue].
Zhang, Lin; Qi, Shengwen
2013-12-01
In order to study light propagation in biological tissue, we analyzed the propagation of a divergent beam in a turbid medium. We set up a Monte Carlo model to simulate divergent beam propagation in a semi-infinite bio-tissue, and used it to study the absorbed photon density for different tissue parameters when a divergent beam enters the tissue. The simulation results revealed the rules governing light propagation in the tissue, and further suggest that light-based diagnosis and treatment can draw on these propagation rules.
Lanczos and Recursion Techniques for Multiscale Kinetic Monte Carlo Simulations
Rudd, R E; Mason, D R; Sutton, A P
2006-03-13
We review an approach to the simulation of a class of microstructural and morphological evolution problems involving both relatively short-ranged chemical and interfacial interactions and long-ranged elastic interactions. The calculation of the anharmonic elastic energy is facilitated with Lanczos recursion. The elastic energy changes affect the rate of vacancy hopping, and hence the rate of microstructural evolution due to vacancy-mediated diffusion. The elastically informed hopping rates are used to construct the event catalog for kinetic Monte Carlo simulation. The simulation is accelerated using a second order residence time algorithm. The effect of elasticity on the microstructural development has been assessed. This article is related to a talk given in honor of David Pettifor at the DGP60 Workshop in Oxford.
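The (first-order) residence-time algorithm underlying such KMC simulations fits in a short function: pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time. The two-state rate table below is an illustrative stand-in for an actual event catalog of elastically informed hopping rates:

```python
import math
import random

rng = random.Random(11)

def residence_time_kmc(rates, state0, t_end):
    """First-order residence-time (BKL) algorithm.

    rates: dict mapping state -> list of (rate, next_state) channels.
    Returns the fraction of simulated time spent in each state."""
    t, state = 0.0, state0
    occupancy = {s: 0.0 for s in rates}
    while t < t_end:
        channels = rates[state]
        r_tot = sum(r for r, _ in channels)
        dt = -math.log(1.0 - rng.random()) / r_tot  # exponential waiting time
        occupancy[state] += min(dt, t_end - t)      # clip the final interval
        t += dt
        u = rng.random() * r_tot                    # rate-proportional selection
        for r, nxt in channels:
            u -= r
            if u <= 0.0:
                state = nxt
                break
        else:
            state = channels[-1][1]  # guard against float round-off

    return {s: o / t_end for s, o in occupancy.items()}

# Two-state hop with forward rate 3 and backward rate 1; detailed balance
# gives stationary occupancies 1/4 (state A) and 3/4 (state B).
occ = residence_time_kmc({"A": [(3.0, "B")], "B": [(1.0, "A")]}, "A", 20_000.0)
```

The second-order acceleration mentioned in the abstract groups fast, repeated event pairs so the simulation does not burn its time flickering between them; this sketch shows only the basic first-order scheme.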
OBJECT KINETIC MONTE CARLO SIMULATIONS OF RADIATION DAMAGE IN TUNGSTEN
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2015-04-16
We used our recently developed lattice-based object kinetic Monte Carlo code, KSOME [1], to carry out simulations of radiation damage in bulk tungsten at temperatures of 300 and 2050 K for various dose rates. Displacement cascades generated from molecular dynamics (MD) simulations for PKA energies of 60, 75, and 100 keV provided the residual point defect distributions. It was found that the number density of vacancies in the simulation box does not change with dose rate, while the number density of vacancy clusters slightly decreases with dose rate, indicating that bigger clusters are formed at larger dose rates. At 300 K, although the average vacancy cluster size increases slightly, the vast majority of vacancies exist as mono-vacancies. At 2050 K, no accumulation of defects was observed during irradiation over a wide range of dose rates for all PKA energies studied in this work.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Kinetic Monte Carlo simulation of the classical nucleation process
NASA Astrophysics Data System (ADS)
Filipponi, A.; Giammatteo, P.
2016-12-01
We implemented a kinetic Monte Carlo computer simulation of the nucleation process in the framework of the coarse-grained scenario of the Classical Nucleation Theory (CNT). The computational approach is efficient for a wide range of temperatures and sample sizes and provides a reliable simulation of the stochastic process. The results for the nucleation rate are in agreement with the CNT predictions based on the stationary solution of the set of differential equations for the continuous variables representing the average population distribution of nuclei size. Time-dependent nucleation behavior can also be simulated, with results in agreement with previous approaches. The method, here established for the case in which the excess free energy of a crystalline nucleus is a smooth function of the size, can be particularly useful when more complex descriptions are required.
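In the coarse-grained CNT picture, the nucleus size performs a biased random walk over the excess free-energy profile until it crosses the critical size. A minimal sketch of such a kinetic Monte Carlo walk, assuming a generic profile ΔG(n) = −nΔμ + σn^(2/3) in units of kT, a size-independent attachment prefactor, and detachment rates fixed by detailed balance (all names and parameter values are illustrative, not the paper's):

```python
import math
import random

def cnt_barrier(n, dmu, sigma):
    """Coarse-grained CNT excess free energy of an n-sized cluster
    (in units of kT): bulk gain -n*dmu plus surface cost sigma*n^(2/3)."""
    return -n * dmu + sigma * n ** (2.0 / 3.0)

def first_passage_time(n_max, dmu=1.0, sigma=2.0, f0=1.0, seed=1):
    """kMC random walk of the cluster size n until it first reaches n_max.

    Attachment happens at rate f0; detachment obeys detailed balance with
    the CNT free-energy profile. Returns the first-passage (nucleation) time.
    """
    rng = random.Random(seed)
    n, t = 1, 0.0
    while n < n_max:
        f = f0  # attachment rate n -> n + 1
        # Detachment rate n -> n - 1 from detailed balance; a monomer
        # (n = 1) cannot shrink further.
        g = f0 * math.exp(cnt_barrier(n, dmu, sigma)
                          - cnt_barrier(n - 1, dmu, sigma)) if n > 1 else 0.0
        total = f + g
        t += -math.log(rng.random()) / total
        n += 1 if rng.random() * total < f else -1
    return t
```

Averaging the first-passage time over many seeds gives an estimate of the inverse nucleation rate that can be compared with the stationary CNT prediction.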
Studying Soft Materials with Soft Potentials -- Fast Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Zong, Jing; Zhang, Xinghua; Zhang, Pengfei; Yin, Yuhua; Li, Baohui; Wang, Qiang
2010-03-01
The basic idea of fast Monte Carlo (FMC) simulations [Q. Wang and Y. Yin, J. Chem. Phys. 130, 104903 (2009); Q. Wang, Soft Matter 5, 4564 (2009)] is to use soft potentials that allow particle overlap, instead of the hard repulsions (e.g., the Lennard-Jones potential in continuum or the self- and mutual-avoiding walks on a lattice) used in conventional molecular simulations. This gives orders-of-magnitude faster/better sampling of configurational space. In addition, since soft potentials are commonly used in polymer field theories, using the same Hamiltonian in both FMC simulations and the theories allows a stringent test of the latter, without any parameter fitting, to unambiguously and quantitatively reveal the consequences of theoretical approximations. Here we use several systems, ranging from small-molecule liquid crystals to homopolymer solutions and brushes, to demonstrate these great advantages of FMC simulations performed both in continuum and on a lattice.
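A bounded pair potential makes every trial move acceptable with finite probability, even one that passes particles through each other. A minimal Metropolis sketch with a Gaussian soft repulsion in a periodic 2D box (the Gaussian form and all parameter values here are an illustrative choice, not the specific Hamiltonian of the cited papers):

```python
import math
import random

def soft_pair_energy(r2, eps=1.0, sigma=1.0):
    """Bounded Gaussian repulsion: finite even at full overlap (r = 0),
    unlike Lennard-Jones, so overlapping trial moves are allowed."""
    return eps * math.exp(-r2 / (2.0 * sigma ** 2))

def metropolis_sweep(pos, box, beta=1.0, step=0.5, rng=None):
    """One Metropolis sweep for point particles interacting via the soft
    potential in a periodic square box (minimum-image convention).
    pos is a list of (x, y) tuples, modified in place; returns the
    number of accepted moves."""
    rng = rng or random.Random(0)

    def energy_of(i, xi, yi):
        e = 0.0
        for j, (xj, yj) in enumerate(pos):
            if j == i:
                continue
            dx = (xi - xj) - box * round((xi - xj) / box)
            dy = (yi - yj) - box * round((yi - yj) / box)
            e += soft_pair_energy(dx * dx + dy * dy)
        return e

    accepted = 0
    for i, (x, y) in enumerate(pos):
        xn = (x + rng.uniform(-step, step)) % box
        yn = (y + rng.uniform(-step, step)) % box
        dE = energy_of(i, xn, yn) - energy_of(i, x, y)
        # Metropolis criterion: soft overlaps cost finite energy only.
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            pos[i] = (xn, yn)
            accepted += 1
    return accepted
```

Because the energy penalty for overlap is bounded, acceptance rates stay high at large step sizes, which is the source of the faster configurational sampling described above.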
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, known population-level outcome probabilities, seeded randomly, drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found a DMFT of 2.3 (n = 128,609) for all cases where DMFT ≠ 0, and a DMFT of 1.9 (n = 13,749) for Indigenous cases only. In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated.
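The population-seeded scheme described above can be sketched in a few lines: each simulated child is assigned group membership from a population-level probability, and an individual DMFT count is then drawn from a group-specific distribution. All probabilities, means, and the Poisson choice below are illustrative placeholders, not the paper's fitted values:

```python
import math
import random

def simulate_child(rng, p_indigenous=0.05, mean_dmft=(1.0, 1.9)):
    """Draw one simulated 12-year-old: a population-level probability seeds
    an individual-level DMFT count (all parameters are hypothetical)."""
    indigenous = rng.random() < p_indigenous
    mean = mean_dmft[1] if indigenous else mean_dmft[0]
    # Poisson draw via Knuth's method for the individual DMFT count.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return indigenous, k
        k += 1

def run_simulation(n=10_000, seed=42):
    """Simulate n children; return the mean DMFT and the Significant
    Caries Index SiC10 (mean DMFT of the worst-affected 10%)."""
    rng = random.Random(seed)
    dmft = sorted((simulate_child(rng)[1] for _ in range(n)), reverse=True)
    top = n // 10
    return sum(dmft) / n, sum(dmft[:top]) / top
```

Repeating `run_simulation` with different seeds mimics the paper's multiple runs, whose spread can then be compared against the sampled population data.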
Exact special twist method for quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro
2016-12-01
We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performance of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performance in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.
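The core idea of EST can be illustrated in a toy setting. For 1D spinless free fermions with an odd particle number (closed shell), the finite-size kinetic energy per particle is monotone in the twist angle on [0, π], so the single twist that reproduces the infinite-size mean-field energy can be found by bisection. This is a sketch of the principle under those assumptions, not the authors' implementation:

```python
import math

def finite_size_energy(N, L, theta):
    """Kinetic energy per particle (hbar = m = 1) of N 1D spinless free
    fermions, N odd, in a box of length L with twisted boundary
    conditions: allowed momenta k_m = (2*pi*m + theta) / L."""
    ks = sorted(((2 * math.pi * m + theta) / L for m in range(-N, N + 1)),
                key=abs)[:N]  # fill the N lowest-|k| states
    return sum(k * k for k in ks) / (2.0 * N)

def special_twist(N, L, tol=1e-10):
    """Bisect for the twist in [0, pi] whose finite-size mean-field energy
    matches the infinite-size limit e = k_F^2 / 6 with k_F = pi * N / L."""
    kf = math.pi * N / L
    target = kf * kf / 6.0
    lo, hi = 0.0, math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if finite_size_energy(N, L, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The correlated calculation is then run at this one twist, instead of averaging over a full twist grid, which is the source of the cost savings noted in the abstract.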
NASA Astrophysics Data System (ADS)
Gu, J.; Bednarz, B.; Caracappa, P. F.; Xu, X. G.
2009-05-01
The latest multiple-detector technologies have further increased the popularity of x-ray CT as a diagnostic imaging modality. There is a continuing need to assess the potential radiation risk associated with such rapidly evolving multi-detector CT (MDCT) modalities and scanning protocols. This need can be met by the use of CT source models that are integrated with patient computational phantoms for organ dose calculations. Based on this purpose, this work developed and validated an MDCT scanner using the Monte Carlo method, and meanwhile the pregnant patient phantoms were integrated into the MDCT scanner model for assessment of the dose to the fetus as well as doses to the organs or tissues of the pregnant patient phantom. A Monte Carlo code, MCNPX, was used to simulate the x-ray source including the energy spectrum, filter and scan trajectory. Detailed CT scanner components were specified using an iterative trial-and-error procedure for a GE LightSpeed CT scanner. The scanner model was validated by comparing simulated results against measured CTDI values and dose profiles reported in the literature. The source movement along the helical trajectory was simulated using the pitch of 0.9375 and 1.375, respectively. The validated scanner model was then integrated with phantoms of a pregnant patient in three different gestational periods to calculate organ doses. It was found that the dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6 month patient phantom and the 9 month patient phantom, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. The paper also discusses how these fetal dose values can be used to evaluate imaging procedures and to assess risk using recommendations of the report from AAPM Task Group 36. This work demonstrates the ability of modeling and validating an MDCT scanner by the Monte Carlo method, as well as
Monte Carlo simulations and dosimetric studies of an irradiation facility
NASA Astrophysics Data System (ADS)
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user, under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results obtained was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy-to-run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
Multidimensional master equation and its Monte-Carlo simulation.
Pang, Juan; Bai, Zhan-Wu; Bao, Jing-Dong
2013-02-28
We derive an integral form of the multidimensional master equation for a Markovian process, in which the transition function is obtained in terms of a set of discrete Langevin equations. The solution of the master equation, namely the probability density function, is calculated by using the Monte Carlo composite sampling method. In comparison with the usual Langevin-trajectory simulation, the present approach effectively decreases the coarse-graining error. We apply the master equation to investigate the time-dependent barrier escape rate of a particle from a two-dimensional metastable potential and show the advantage of this approach in the calculation of quantities that depend on the probability density function.
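For context, the simplest Monte Carlo treatment of a master equation is direct trajectory sampling of the jump process (the Gillespie algorithm), which the composite-sampling approach above improves upon. A sketch for a one-dimensional birth-death master equation, with illustrative rates (this is the baseline trajectory method, not the paper's composite sampling):

```python
import math
import random

def gillespie_birth_death(birth=1.0, death_rate=0.1, t_end=50.0, seed=0):
    """Sample one trajectory of the birth-death master equation
        dP_n/dt = birth * P_{n-1} + death_rate * (n+1) * P_{n+1}
                  - (birth + death_rate * n) * P_n
    by the Gillespie algorithm; returns the state n at time t_end.
    The stationary distribution is Poisson with mean birth/death_rate."""
    rng = random.Random(seed)
    n, t = 0, 0.0
    while t < t_end:
        total = birth + death_rate * n  # total exit rate from state n
        t += -math.log(rng.random()) / total  # exponential waiting time
        n += 1 if rng.random() * total < birth else -1
    return n
```

Estimating the full probability density this way requires histogramming many independent trajectories, which is where the coarse-graining error discussed above enters.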
Monte Carlo simulation of vibrational relaxation in nitrogen