Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
Kinetic plasma modeling with quiet Monte Carlo direct simulation.
Albright, B. J.; Jones, M. E.; Lemons, D. S.; Winske, D.
2001-01-01
The modeling of collisions among particles in space plasma media poses a challenge for computer simulation. Traditional plasma methods are able to model well the extremes of highly collisional plasmas (MHD and Hall-MHD simulations) and collisionless plasmas (particle-in-cell simulations). However, neither is capable of treating the intermediate, semi-collisional regime. The authors have invented a new approach to particle simulation called Quiet Monte Carlo Direct Simulation (QMCDS) that can, in principle, treat plasmas with arbitrary and arbitrarily varying collisionality. The QMCDS method will be described, and applications of the QMCDS method as 'proof of principle' to diffusion, hydrodynamics, and radiation transport will be presented. Of particular interest to the space plasma simulation community is the application of QMCDS to kinetic plasma modeling. A method for QMCDS simulation of kinetic plasmas will be outlined, and preliminary results of simulations in the limit of weak pitch-angle scattering will be presented.
A generalized hard-sphere model for Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Hassan, H. A.; Hash, David B.
1993-01-01
A new molecular model, called the generalized hard-sphere, or GHS model, is introduced. This model contains, as a special case, the variable hard-sphere model of Bird (1981) and is capable of reproducing all of the analytic viscosity coefficients available in the literature that are derived for a variety of interaction potentials incorporating attraction and repulsion. In addition, a new procedure for determining interaction potentials in a gas mixture is outlined. Expressions needed for implementing the new model in the direct simulation Monte Carlo methods are derived. This development makes it possible to employ interaction models that have the same level of complexity as used in Navier-Stokes calculations.
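The collision mechanics that molecular models such as VHS and GHS plug into can be illustrated with the simplest special case: an elastic hard-sphere collision between equal-mass particles, in which the relative velocity is scattered isotropically in the center-of-mass frame. The sketch below is a generic illustration of that standard DSMC collision step, not the authors' GHS implementation:

```python
import math
import random

def hard_sphere_collision(v1, v2, rng):
    """Elastic hard-sphere collision in DSMC style (equal masses): rotate
    the relative velocity to a uniformly random direction in the
    centre-of-mass frame, conserving momentum and kinetic energy."""
    cm = [(a + b) / 2.0 for a, b in zip(v1, v2)]   # centre-of-mass velocity
    g = math.dist(v1, v2)                          # relative speed (preserved)
    cos_t = 2.0 * rng.random() - 1.0               # uniform on the sphere
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    gr = [g * sin_t * math.cos(phi), g * sin_t * math.sin(phi), g * cos_t]
    v1p = [c + 0.5 * x for c, x in zip(cm, gr)]
    v2p = [c - 0.5 * x for c, x in zip(cm, gr)]
    return v1p, v2p
```

A GHS or VHS model would, in addition, make the collision cross section depend on the relative speed; the post-collision kinematics above are unchanged.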
Drag coefficient modeling for GRACE using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; McLaughlin, Craig A.; Sutton, Eric K.
2013-12-01
The drag coefficient is a major source of uncertainty in predicting the orbit of a satellite in low Earth orbit (LEO). Computational methods such as Test Particle Monte Carlo (TPMC) and Direct Simulation Monte Carlo (DSMC) are important tools for accurately computing physical drag coefficients. However, these methods are computationally expensive and cannot be employed in real time. Therefore, modeling of the physical drag coefficient is required. This work presents a technique for developing parameterized drag coefficient models using the DSMC method. The technique is validated by developing a model for the Gravity Recovery and Climate Experiment (GRACE) satellite. Results show that drag coefficients computed using the developed model for GRACE agree to within 1% with those computed using DSMC.
Monte Carlo simulation of photon scattering in biological tissue models.
Kumar, D; Chacko, S; Singh, M
1999-10-01
Monte Carlo simulation of photon scattering, with and without abnormal tissue placed at various locations in the rectangular, semi-circular and semi-elliptical tissue models, has been carried out. The absorption coefficient of the tissue considered abnormal is high and its scattering coefficient low compared to those of the control tissue. The placement of the abnormality at various locations within the models affects the transmission and surface emission of photons at various locations. The scattered photons originating from deeper layers make the maximum contribution at farther distances from the beam entry point. The contribution of various layers to photon scattering provides valuable data on the variability of internal composition.
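The photon-transport scheme underlying tissue simulations of this kind is a random walk with exponentially distributed free paths and an absorb-or-scatter decision at each interaction. The sketch below is a minimal 1-D slab illustration of that standard scheme under stated assumptions (isotropic scattering, no refractive-index mismatch); it is not the authors' multi-geometry model:

```python
import math
import random

def simulate_photons(mu_a, mu_s, thickness, n_photons=10000, seed=1):
    """Minimal 1-D photon random walk through a homogeneous slab.

    Each photon takes exponentially distributed steps (mean free path
    1/mu_t), is absorbed with probability mu_a/mu_t at each interaction,
    and otherwise scatters isotropically.  Returns the fraction of
    photons transmitted through the far side of the slab.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n_photons):
        z, cos_theta = 0.0, 1.0              # depth and direction cosine
        while True:
            step = -math.log(rng.random()) / mu_t
            z += step * cos_theta
            if z >= thickness:               # escaped through the far side
                transmitted += 1
                break
            if z < 0:                        # back-reflected out of the slab
                break
            if rng.random() < mu_a / mu_t:   # absorbed at this interaction
                break
            cos_theta = 2.0 * rng.random() - 1.0  # isotropic scatter
    return transmitted / n_photons
```

Raising the absorption coefficient lowers the transmitted fraction, which is the qualitative effect the abnormal-tissue inclusions produce in the study above.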
Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models
NASA Astrophysics Data System (ADS)
Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti
2016-10-01
A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.
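A minimal, hypothetical illustration of MCMC inference for an HMM (far simpler than the MDPHMM discussed above): a two-state chain with Bernoulli emissions, a scaled forward algorithm for the marginal likelihood, and random-walk Metropolis on a single unknown emission probability with a uniform prior. All model choices here are assumptions made for the sketch:

```python
import math
import random

def forward_loglik(obs, trans, emit_p):
    """Log-likelihood of a binary observation sequence under a 2-state HMM
    with Bernoulli emissions, via the scaled forward algorithm."""
    alpha = [0.5, 0.5]                      # uniform initial distribution
    loglik = 0.0
    for y in obs:
        # propagate one step through the chain, then weight by emission prob
        pred = [sum(alpha[i] * trans[i][j] for i in range(2)) for j in range(2)]
        alpha = [pred[j] * (emit_p[j] if y == 1 else 1 - emit_p[j])
                 for j in range(2)]
        norm = sum(alpha)
        loglik += math.log(norm)
        alpha = [a / norm for a in alpha]   # rescale to avoid underflow
    return loglik

def metropolis_p1(obs, trans, p0, n_iter=2000, seed=3):
    """Random-walk Metropolis on the unknown emission probability p1
    of state 1, with a uniform prior on (0, 1)."""
    rng = random.Random(seed)
    p1, ll = 0.5, forward_loglik(obs, trans, (p0, 0.5))
    samples = []
    for _ in range(n_iter):
        prop = p1 + rng.gauss(0.0, 0.1)
        if 0.0 < prop < 1.0:                # reject proposals outside the prior
            ll_prop = forward_loglik(obs, trans, (p0, prop))
            if math.log(rng.random()) < ll_prop - ll:
                p1, ll = prop, ll_prop
        samples.append(p1)
    return samples
```

The posterior mean of the retained samples (after burn-in) should concentrate near the emission probability that generated the data.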
Monte Carlo modelling of Schottky diode for rectenna simulation
NASA Astrophysics Data System (ADS)
Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.
2017-09-01
Before designing a detector circuit, the electrical parameters extraction of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact such as image force effect or tunneling. The weight of tunneling and thermionic current is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the current saturation with bias. Harmonic Balance (HB) simulation on a rectifier circuit within Advanced Design System (ADS) software shows that considering non-linear ideality factor and saturation current for the electrical model of the Schottky diode does not seem essential. Indeed, bias independent values extracted in forward regime on I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.
Monte Carlo simulations of lattice models for single polymer systems
Hsu, Hsiao-Ping
2014-10-28
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ~ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Optimising muscle parameters in musculoskeletal models using Monte Carlo simulation.
Reed, Erik B; Hanson, Andrea M; Cavanagh, Peter R
2015-01-01
The use of musculoskeletal simulation software has become a useful tool for modelling joint and muscle forces during human activity, including in reduced gravity because direct experimentation is difficult. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler™ (San Clemente, CA, USA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces but no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. The rectus femoris was predicted to peak at 60.1% activation in the same test case compared to 19.2% activation using default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Hopping electron model with geometrical frustration: kinetic Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Terao, Takamichi
2016-09-01
The hopping electron model on the Kagome lattice was investigated by kinetic Monte Carlo simulations, and the non-equilibrium nature of the system was studied. We have numerically confirmed that aging phenomena are present in the autocorrelation function C(t, t_W) of the electron system on the Kagome lattice, which is a geometrically frustrated lattice without any disorder. The waiting-time distribution p(τ) of hopping electrons in the system on the Kagome lattice has also been studied. It is confirmed that the profile of p(τ) obtained at lower temperatures obeys a power law, which is a characteristic feature of a continuous-time random walk of electrons. These features were also compared with the characteristics of the Coulomb glass model, used as a model of disordered thin films and doped semiconductors. This work represents an advance in the understanding of the dynamics of geometrically frustrated systems and will serve as a basis for further studies of these physical systems.
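The kinetic Monte Carlo machinery itself, sampling exponential waiting times from the total escape rate before each hop, can be sketched generically. The toy model below (a single carrier hopping on a ring with prescribed site rates) is an illustrative assumption, not the paper's Kagome-lattice model:

```python
import math
import random

def kmc_trajectory(rates, n_events, seed=5):
    """Residence-time (Gillespie-style) kinetic Monte Carlo on a ring.

    `rates[i]` is the total hop rate out of site i, split equally between
    the two neighbours.  Returns the visited sites and the exponentially
    distributed waiting time drawn before each hop.
    """
    rng = random.Random(seed)
    n = len(rates)
    site, sites, waits = 0, [], []
    for _ in range(n_events):
        r_tot = rates[site]
        waits.append(-math.log(rng.random()) / r_tot)  # waiting time ~ Exp(r_tot)
        # hop left or right with equal probability
        site = (site + (1 if rng.random() < 0.5 else -1)) % n
        sites.append(site)
    return sites, waits
```

In a model with a broad distribution of site rates (e.g. thermally activated escape from random trap depths), the aggregate waiting-time distribution p(τ) develops the heavy, power-law-like tail characteristic of continuous-time random walks.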
Modeling low-coherence enhanced backscattering using Monte Carlo simulation.
Subramanian, Hariharan; Pradhan, Prabhakar; Kim, Young L; Liu, Yang; Li, Xu; Backman, Vadim
2006-08-20
Constructive interference between coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light lambda to the transport mean-free-path length l_s* of a random medium. In biological media a large l_s* (approximately 0.5-2 mm) > lambda results in an extremely small (approximately 0.001 degrees) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length L_sc < l_s*) was demonstrated. Low spatial coherence behaves as a spatial filter rejecting longer path lengths and thus resulting in an increase of more than 100 times in the angular width of low coherence EBS (LEBS) cones. However, a conventional diffusion approximation-based model of EBS has not been able to explain such a dramatic increase in LEBS width. We present a photon random walk model of LEBS by using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of the LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with the experimental data.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.
Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O
2015-04-06
Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry.
Modeling root-reinforcement with a Fiber-Bundle Model and Monte Carlo simulation
USDA-ARS?s Scientific Manuscript database
This paper uses sensitivity analysis and a Fiber-Bundle Model (FBM) to examine assumptions underpinning root-reinforcement models. First, different methods for apportioning load between intact roots were investigated. Second, a Monte Carlo approach was used to simulate plants with heartroot, platero...
Monte Carlo simulation of classical spin models with chaotic billiards.
Suzuki, Hideyuki
2013-11-01
It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
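The random-number Metropolis baseline against which the billiard dynamics is compared is the textbook single-spin-flip algorithm for the 2-D Ising model. A minimal sketch (units with J = k_B = 1, ordered initial state):

```python
import math
import random

def metropolis_ising(L=8, T=1.5, sweeps=400, seed=2):
    """Standard random-number Metropolis sampling of the 2-D Ising model
    on an L x L periodic lattice.  Returns the absolute magnetization
    per spin after `sweeps` lattice sweeps."""
    rng = random.Random(seed)
    spins = [[1 for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb          # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]     # accept the flip
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)
```

Below the critical temperature (T_c ≈ 2.27) the magnetization stays close to 1; well above it, the lattice disorders. The billiard approach replaces the `rng.random()` calls with deterministic chaotic dynamics while targeting the same distribution.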
NASA Astrophysics Data System (ADS)
Nizenkov, P.; Pfeiffer, M.; Mirza, A.; Fasoulas, S.
2017-07-01
For the simulation of atmospheric entry maneuvers at Mars and at Saturn's moon Titan, the chemistry modeling of polyatomic molecules is implemented in the direct simulation Monte Carlo method within the reactive plasma flow solver PICLas. An additional reaction condition as well as the consideration of the vibrational degrees of freedom is described in the context of the total collision energy model. The treatment of reverse exchange and recombination reactions is discussed, where the low temperature exponent of the Arrhenius fit for methane dissociation limited the calculation of the reaction probability at relevant temperatures. An alternative method based on the equilibrium constant is devised. The post-reaction energy redistribution is performed under the assumption of equipartition of the remaining collisional energy. The implementation is verified for several reaction paths with simple reservoir simulations. Finally, the feasibility of the new chemistry model is demonstrated by a simulation of a trajectory point of the Huygens probe at Titan.
Analytical positron range modelling in heterogeneous media for PET Monte Carlo simulation.
Lehnert, Wencke; Gregoire, Marie-Claude; Reilhac, Anthonin; Meikle, Steven R
2011-06-07
Monte Carlo simulation codes that model positron interactions along their tortuous path are expected to be accurate but are usually slow. A simpler and potentially faster approach is to model positron range from analytical annihilation density distributions. The aims of this paper were to efficiently implement and validate such a method, with the addition of medium heterogeneity representing a further challenge. The analytical positron range model was evaluated by comparing annihilation density distributions with those produced by the Monte Carlo simulator GATE and by quantitatively analysing the final reconstructed images of Monte Carlo simulated data. In addition, the influence of positronium formation on positron range and hence on the performance of Monte Carlo simulation was investigated. The results demonstrate that 1D annihilation density distributions for different isotope-media combinations can be fitted with Gaussian functions and hence be described by simple look-up tables of fitting coefficients. Together with the method developed for simulating positron range in heterogeneous media, this allows for efficient modelling of positron range in Monte Carlo simulation. The level of agreement of the analytical model with GATE depends somewhat on the simulated scanner and the particular research task, but appears to be suitable for lower energy positron emitters, such as (18)F or (11)C. No reliable conclusion about the influence of positronium formation on positron range and simulation accuracy could be drawn.
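Once Gaussian widths have been fitted per isotope-medium pair, applying such a range model reduces to a table look-up and a 3-D Gaussian displacement of the emission point. The sketch below uses hypothetical width values purely for illustration; real entries would come from fits to GATE annihilation densities:

```python
import random

# Hypothetical look-up table of fitted 1-D Gaussian widths (mm) for
# isotope-medium pairs; real values would be fitted to simulated
# annihilation density distributions.
SIGMA_MM = {("F18", "water"): 0.25, ("C11", "water"): 0.45}

def sample_annihilation_point(origin, isotope, medium, rng):
    """Displace an emission point by a 3-D Gaussian positron range,
    using the fitted 1-D width from the look-up table."""
    s = SIGMA_MM[(isotope, medium)]
    return tuple(x + rng.gauss(0.0, s) for x in origin)
```

Handling heterogeneous media would additionally require choosing or blending widths according to the tissues traversed, which is the harder part addressed in the paper.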
A new Monte Carlo simulation model for laser transmission in smokescreen based on MATLAB
NASA Astrophysics Data System (ADS)
Lee, Heming; Wang, Qianqian; Shan, Bin; Li, Xiaoyang; Gong, Yong; Zhao, Jing; Peng, Zhong
2016-11-01
A new Monte Carlo simulation model of laser transmission in smokescreen is proposed in this paper. In the traditional Monte Carlo simulation model, all particles are assigned the same radius and the initial direction cosine of the photons is fixed, which yields only an approximate result. The new model is implemented in MATLAB and can simulate laser transmittance in a smokescreen containing particles of different sizes, and its output is closer to real scenarios. To alleviate the influence of laser divergence while traveling in the air, we changed the initial direction cosine of the photons relative to the traditional Monte Carlo model. The mixed-radius particle smoke simulation results agree with the measured transmittance under the same experimental conditions with a 5.42% error rate.
NASA Astrophysics Data System (ADS)
Afanasiev, A. N.; Vainio, R. O.; Palmroth, M.; Pfau-Kempf, Y.; Ganse, U.; Battarbee, M.
2016-12-01
Quasi-parallel astrophysical shocks are considered to develop so-called foreshock regions featuring enhanced levels of plasma turbulence. The foreshock plays a key role in the diffusive shock acceleration (DSA) mechanism of ion acceleration in shocks. There have been several simulation models addressing particle acceleration/energization coupled with foreshock evolution. One of those is the self-consistent Monte Carlo simulation model, which is built on the quasi-linear theory of ion interactions with Alfvén waves. This model has been applied to simulate ion acceleration in coronal and interplanetary shocks. A more fundamental plasma simulation model, which can be used to study the same processes, is the hybrid-Vlasov (kinetic ions, fluid electrons) approach. The latter is utilized by the Vlasiator code simulating the near-Earth global plasma environment. In this work, we apply both models to the Earth's ULF foreshock with the aim of better understanding the limitations of quasi-linear modeling of foreshock development and ion acceleration. Our study shows that the models are consistent with each other in terms of the dominant wave polarization and the shape of the power spectrum of magnetic fluctuations. The work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637324 (HESPERIA).
Modeling of hysteresis loops by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.
2015-12-01
Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For any assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant on the coercive field for different temperatures and algorithms. A study of the distribution of accepted move angles allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numeric conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads to better-quantified coercive field measurements resulting from the physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
Monte Carlo simulations of Landau-Ginzburg model for membranes
NASA Astrophysics Data System (ADS)
Koibuchi, Hiroshi; Shobukhov, Andrey
2014-02-01
The Landau-Ginzburg (LG) model for membranes is numerically studied on triangulated spheres in R^3. The LG model is in sharp contrast to the model of Helfrich-Polyakov (HP). The reason for this difference is that the curvature energy of the LG (HP) Hamiltonian is defined by means of the tangential (normal) vector of the surface. For this reason, the curvature energy of the LG model includes the in-plane bending or shear energy component, which is not included in the curvature energy of the HP model. From the simulation data, we find that the LG model undergoes a first-order collapse transition. The results of the LG model in the higher-dimensional spaces R^d (d > 3) and on the self-avoiding (SA) surfaces in R^3 are presented and discussed. We also study the David-Guitter (DG) model, which is a variant of the LG model, and find that the DG model undergoes a first-order transition. It is also found that the transition can be observed only on the homogeneous surfaces, which are composed of almost uniform triangles according to the condition that the induced metric ∂_a r · ∂_b r is close to δ_ab.
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
Monte Carlo Simulations of Compressible Ising Models: Do We Understand Them?
NASA Astrophysics Data System (ADS)
Landau, D. P.; Dünweg, B.; Laradji, M.; Tavazza, F.; Adler, J.; Cannavaccioulo, L.; Zhu, X.
Extensive Monte Carlo simulations have begun to shed light on our understanding of phase transitions and universality classes for compressible Ising models. A comprehensive analysis of a Landau-Ginzburg-Wilson Hamiltonian for systems with elastic degrees of freedom resulted in the prediction that there should be four distinct cases that would have different behavior, depending upon symmetries and thermodynamic constraints. We shall provide an account of the results of careful Monte Carlo simulations for a simple compressible Ising model that can be suitably modified so as to replicate all four cases.
Inclusion of coherence in Monte Carlo models for simulation of x-ray phase contrast imaging.
Cipiccia, Silvia; Vittoria, Fabio A; Weikum, Maria; Olivo, Alessandro; Jaroszynski, Dino A
2014-09-22
Interest in phase contrast imaging methods based on electromagnetic wave coherence has increased significantly in recent years, particularly at X-ray energies. This is giving rise to a demand for effective simulation methods. Coherent imaging approaches are usually based on wave optics, which requires significant computational resources, particularly for producing 2D images. Monte Carlo (MC) methods, used to track individual particles/photons in particle physics, are not usually considered appropriate for describing coherence effects. Previous preliminary work has evaluated the possibility of incorporating coherence in Monte Carlo codes. In this paper, we present the implementation of refraction in a model based on time-of-flight calculations and the Huygens-Fresnel principle, which allows the formation of phase contrast images to be reproduced in partially and fully coherent experimental conditions. The model is implemented in the FLUKA Monte Carlo code, and X-ray phase contrast imaging simulations are compared with experiments and wave optics calculations.
Accelerated Monte Carlo models to simulate fluorescence spectra from layered tissues.
Swartling, Johannes; Pifferi, Antonio; Enejder, Annika M K; Andersson-Engels, Stefan
2003-04-01
Two efficient Monte Carlo models are described, facilitating predictions of complete time-resolved fluorescence spectra from a light-scattering and light-absorbing medium. These are compared with a third, conventional fluorescence Monte Carlo model in terms of accuracy, signal-to-noise statistics, and simulation time. The improved computation efficiency is achieved by means of a convolution technique, justified by the symmetry of the problem. Furthermore, the reciprocity principle for photon paths, employed in one of the accelerated models, is shown to simplify the computations of the distribution of the emitted fluorescence drastically. A so-called white Monte Carlo approach is finally suggested for efficient simulations of one excitation wavelength combined with a wide range of emission wavelengths. The fluorescence is simulated in a purely scattering medium, and the absorption properties are instead taken into account analytically afterward. This approach is applicable to the conventional model as well as to the two accelerated models. Essentially the same absolute values for the fluorescence integrated over the emitting surface and time are obtained for the three models within the accuracy of the simulations. The time-resolved and spatially resolved fluorescence exhibits a slight overestimation at short delay times close to the source corresponding to approximately two grid elements for the accelerated models, as a result of the discretization and the convolution. The improved efficiency is most prominent for the reverse-emission accelerated model, for which the simulation time can be reduced by up to two orders of magnitude.
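The "white Monte Carlo" idea, simulating once in a purely scattering medium and folding absorption in analytically afterward via Beer-Lambert weighting of the stored path lengths, can be sketched as follows. This is a 1-D toy under simplifying assumptions (isotropic scattering; the final step is not clipped at the boundary), not the authors' fluorescence code:

```python
import math
import random

def white_mc_pathlengths(mu_s, thickness, n_photons=5000, seed=9):
    """'White' Monte Carlo pass: simulate a purely scattering 1-D slab
    once and record the total path length of each transmitted photon."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_photons):
        z, mu, path = 0.0, 1.0, 0.0      # depth, direction cosine, path length
        while 0.0 <= z < thickness:
            step = -math.log(rng.random()) / mu_s
            z += step * mu
            path += step                  # last step slightly overshoots; fine for a sketch
            mu = 2.0 * rng.random() - 1.0  # isotropic scatter
        if z >= thickness:
            paths.append(path)
    return paths

def transmission_with_absorption(paths, mu_a, n_photons):
    """Re-weight the stored transmitted paths analytically by
    Beer-Lambert absorption exp(-mu_a * path)."""
    return sum(math.exp(-mu_a * p) for p in paths) / n_photons
```

The expensive scattering simulation runs once; any absorption coefficient (and hence, for fluorescence, any emission wavelength) can then be evaluated cheaply from the same stored paths.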
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
Fast Monte Carlo-simulator with full collimator and detector response modelling for SPECT.
Sohlberg, Antti O; Kajaste, Markus T
2012-01-01
Monte Carlo (MC)-simulations have proved to be a valuable tool in studying SPECT-reconstruction algorithms. Despite their popularity, the use of Monte Carlo-simulations is still often limited by their large computation demand. This is especially true in situations where full collimator and detector modelling with septal penetration, scatter and X-ray fluorescence needs to be included. This paper presents a rapid and simple MC-simulator, which can effectively reduce the computation times. The simulator was built on the convolution-based forced detection principle, which can markedly lower the number of simulated photons. Full collimator and detector response look-up tables are pre-simulated and then later used in the actual MC-simulations to model the system response. The developed simulator was validated by comparing it against (123)I point source measurements made with a clinical gamma camera system and against (99m)Tc software phantom simulations made with the SIMIND MC-package. The results showed good agreement between the new simulator, measurements and the SIMIND-package. The new simulator provided near noise-free projection data in approximately 1.5 min per projection with (99m)Tc, which was less than one-tenth of SIMIND's time. The developed MC-simulator can markedly decrease the simulation time without sacrificing image quality.
Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models
NASA Astrophysics Data System (ADS)
Si, Lisha; Liao, Xiaoyun; Zhou, Nengji
2016-12-01
Using the newly developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking as examples the random-field Ising model with a driving field and the driven bond-diluted Ising model. In comparison with the usual Monte Carlo method, the EMC algorithm greatly improves simulation efficiency. Based on the short-time dynamic scaling form, both the transition field and the critical exponents of the depinning transition are determined accurately via large-scale simulations with lattice sizes up to L = 8912, significantly refining the results in the earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled, with the exponents β = 0.304(5), ν = 1.32(3), z = 1.12(1), and ζ = 0.90(1), quite different from those of the quenched Edwards-Wilkinson equation.
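As a point of reference for the "usual Monte Carlo method" that the EMC algorithm is compared against, a single-spin-flip Metropolis sweep for the driven random-field Ising model can be sketched as follows (a minimal illustration; the lattice size, temperature, field strengths and sweep count are arbitrary, not values from the study):

```python
import numpy as np

def metropolis_sweep(spins, h_random, H, J=1.0, T=1.0, rng=None):
    """One Metropolis sweep of the 2D random-field Ising model with a
    uniform driving field H and periodic boundary conditions."""
    rng = np.random.default_rng() if rng is None else rng
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Energy change for flipping spin (i, j).
        dE = 2.0 * spins[i, j] * (J * nn + h_random[i, j] + H)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L = 16
spins = rng.choice([-1, 1], size=(L, L))
h_random = rng.normal(0.0, 1.0, size=(L, L))  # quenched random fields
for _ in range(50):
    metropolis_sweep(spins, h_random, H=0.5, rng=rng)
m = spins.mean()
print(f"magnetisation after 50 sweeps: {m:.3f}")
```

A production depinning study would of course use far larger lattices and optimised update schemes; this sketch only shows the elementary dynamics being accelerated.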
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as low Earth orbit, lunar orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
NASA Astrophysics Data System (ADS)
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy is proposed that combines an improved response surface model (IRSM) with an advanced Monte Carlo (MC) method, based on experimental static tests, prior information and uncertainties. First, the IRSM and its mathematical model were developed with emphasis on the moving least-squares method, and the advanced MC simulation method was formulated using Latin hypercube sampling. The SMU procedure was then applied, with experimental static tests, to complex structures: SMU of a simply supported beam and of an aeroengine stator system (casings) was carried out to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy achieves high computational precision and efficiency for complex structural systems; (2) the IRSM is an effective model, since its SMU time is far less than that of the traditional response surface method, which promises to improve the speed and accuracy of SMU; and (3) the advanced MC method markedly decreases the number of samples drawn from finite element simulations and the elapsed time of SMU. This work provides a promising SMU strategy for complex structures and enriches the theory of model updating.
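The Latin hypercube sampling underpinning the advanced MC method can be sketched in a few lines (a generic stratified sampler on the unit hypercube, not the authors' implementation):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1]^d: exactly one point per
    stratum in each dimension, with an independent random shuffle of
    the strata per dimension."""
    rng = np.random.default_rng() if rng is None else rng
    # Row i starts in stratum i for every dimension ...
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # ... then each column is shuffled independently.
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    return u

pts = latin_hypercube(10, 3, rng=np.random.default_rng(4))
# Each column has exactly one point per decile stratum.
print(np.sort((pts * 10).astype(int), axis=0)[:, 0])
```

Compared with plain random sampling, this stratification is what reduces the number of finite element evaluations needed for a given variance.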
Monte Carlo simulation based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, Rizal; Waris, Abdul; Viridi, Sparisoma
2016-09-01
Nuclear fission has traditionally been modeled using two approaches, macroscopic and microscopic. This work proposes another approach, in which the nucleus is treated as a toy model. The aim is to assess the usefulness of particle distributions in fission yield calculations. Inasmuch as the nucleus is a toy, the Fission Toy Model (FTM) does not completely represent the real process in nature. A fission event in FTM is represented by one random number, taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. Following the nucleon density approximation, a Gaussian distribution is chosen as the particle distribution. This distribution function generates random numbers that randomize the distances between particles and a central point. The scission process starts by splitting the central point of the compound nucleus into two parts, a left and a right central point. The yield is determined from the portion of the nucleon distribution, which is proportional to the portion of mass numbers. Using the modified FTM, the characteristics of the particle distribution in each fission event can be formed before the fission process, and these characteristics can be used to predict real nucleon interactions in the fission process. The FTM calculations suggest that the γ value behaves like an energy.
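The core sampling step of such a toy model can be sketched as follows (a hypothetical illustration only: a Gaussian draw for the light-fragment mass with the heavy fragment taking the remainder; the mass numbers and width are invented for the example, not FTM's fitted parameters):

```python
import numpy as np

def sample_fission_yields(n_events, A=236, mu_light=96, sigma=6.0, rng=None):
    """Toy-model fission: draw the light-fragment mass number from a
    Gaussian about mu_light; the heavy fragment takes the remainder
    A - A_light, so mass number is conserved event by event."""
    rng = np.random.default_rng() if rng is None else rng
    a_light = rng.normal(mu_light, sigma, size=n_events).round().astype(int)
    a_heavy = A - a_light
    return a_light, a_heavy

a_light, a_heavy = sample_fission_yields(10_000, rng=np.random.default_rng(1))
print(f"mean light fragment mass: {a_light.mean():.1f}")
print(f"mean heavy fragment mass: {a_heavy.mean():.1f}")
```

Varying the width of the Gaussian in such a sketch shifts the asymmetry of the resulting yield distribution, which is the effect the abstract describes.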
Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave
NASA Astrophysics Data System (ADS)
Yasuda, Shugo
2017-02-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method successfully reproduces the traveling population wave of bacteria observed experimentally and reveals the microscopic dynamics of bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic regime of small Knudsen numbers is verified numerically.
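The run-and-tumble Monte Carlo step can be illustrated with a minimal one-dimensional sketch (the tumble-rate bias and the fixed chemoattractant gradient are invented illustrative values, and the macroscopic finite-volume coupling of the actual method is omitted):

```python
import numpy as np

def run_and_tumble_1d(n_bact, n_steps, dt=0.01, speed=1.0,
                      base_rate=1.0, bias=0.5, grad=1.0, rng=None):
    """1D run-and-tumble: each bacterium moves at constant speed and
    tumbles (reverses direction) at a rate that is lowered when it is
    moving up a fixed linear chemoattractant gradient."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n_bact)
    v = rng.choice([-1.0, 1.0], size=n_bact) * speed
    for _ in range(n_steps):
        # Tumbling is suppressed when moving up-gradient (v * grad > 0).
        rate = base_rate * (1.0 - bias * np.sign(v) * np.sign(grad))
        tumble = rng.random(n_bact) < rate * dt
        v[tumble] *= -1.0
        x += v * dt
    return x

x = run_and_tumble_1d(5000, 2000, rng=np.random.default_rng(2))
print(f"mean position: {x.mean():.2f}")  # positive: net drift up the gradient
```

In the full method the gradient is not fixed but evolves via the finite volume solver, which is what produces the self-sustained traveling wave.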
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g., those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
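A generic black-box way to generate random positions from a 3D density distribution, of the kind the abstract contrasts with SKIRT's customised generators, is rejection sampling (a minimal sketch with an invented Plummer-like toy density; SKIRT's actual component-specific generators are far more efficient):

```python
import numpy as np

def sample_positions(density, n, box=1.0, rng=None, batch=10_000):
    """Rejection-sample n positions from an (unnormalised) 3D density
    on the cube [-box, box]^3. The envelope constant is estimated from
    a coarse probe grid, so the density should be reasonably smooth."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.linspace(-box, box, 21)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    rho_max = density(X, Y, Z).max() * 1.1  # safety margin on the envelope
    out = []
    while sum(len(a) for a in out) < n:
        p = rng.uniform(-box, box, size=(batch, 3))
        keep = rng.random(batch) * rho_max < density(p[:, 0], p[:, 1], p[:, 2])
        out.append(p[keep])
    return np.concatenate(out)[:n]

# Spherically symmetric toy density with a Plummer-like core.
plummer = lambda x, y, z: (1.0 + (x**2 + y**2 + z**2) / 0.1) ** -2.5
pos = sample_positions(plummer, 5000, rng=np.random.default_rng(3))
r = np.linalg.norm(pos, axis=1)
print(f"median radius: {np.median(r):.3f}")  # concentrated toward the core
```

The acceptance rate of such a black-box sampler collapses for strongly peaked or clumpy densities, which is precisely why per-component generators pay off.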
NASA Astrophysics Data System (ADS)
Erdem, Riza; Aydiner, Ekrem
2009-03-01
Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules is discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate the model. The results are shown to match well the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.
Quantum Monte Carlo simulation of a two-dimensional Majorana lattice model
NASA Astrophysics Data System (ADS)
Hayata, Tomoya; Yamamoto, Arata
2017-07-01
We study interacting Majorana fermions in two dimensions as a low-energy effective model of a vortex lattice in two-dimensional time-reversal-invariant topological superconductors. For that purpose, we implement ab initio quantum Monte Carlo simulation to the Majorana fermion system in which the path-integral measure is given by a semipositive Pfaffian. We discuss spontaneous breaking of time-reversal symmetry at finite temperatures.
NASA Astrophysics Data System (ADS)
Flicstein, Jean; Pata, S.; Chun, L. S. H. K.; Palmier, Jean F.; Courant, J. L.
1998-05-01
A model for ultraviolet-induced chemical vapor deposition (UV CVD) of a-SiN:H is described. In the simulation of the UV CVD process, the creation of activated charged centers, species incorporation, surface diffusion, and desorption are considered as the elementary steps of the photonucleation and photodeposition mechanisms. The process is characterized by two surface sticking coefficients. Surface diffusion of species is modeled with a Gaussian distribution. A real-time Monte Carlo method is used to determine photonucleation and photodeposition rates in nanostructures. Comparison of experimental and simulation results for a-SiN:H shows that the model predicts the temporal evolution of the morphology under operating conditions down to atomistic resolution.
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
Multicanonical Monte Carlo simulations of anisotropic SU(3) and SU(4) Heisenberg models
NASA Astrophysics Data System (ADS)
Harada, Kenji; Kawashima, Naoki; Troyer, Matthias
2009-03-01
We present the results of multicanonical Monte Carlo simulations of two-dimensional anisotropic SU(3) and SU(4) Heisenberg models. In our previous study [K. Harada, et al., J. Phys. Soc. Jpn. 76, 013703 (2007)], we found evidence for a direct quantum phase transition from the valence-bond-solid (VBS) phase to the SU(3) symmetry-breaking phase in the SU(3) model, and we proposed the possibility of deconfined critical phenomena (DCP) [T. Senthil, et al., Science 303, 1490 (2004); T. Grover and T. Senthil, Phys. Rev. Lett. 98, 247202 (2007)]. Here we present new results obtained with an improved, multicanonical Monte Carlo algorithm. Using a flow-method-like technique [A.B. Kuklov, et al., Annals of Physics 321, 1602 (2006)], we discuss the possibility of DCP in both models.
NASA Astrophysics Data System (ADS)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-01
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
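Bird's VHS total cross section, which these calibrations parameterise, can be evaluated directly (a sketch using commonly quoted illustrative N2-like parameters; treat them as examples, not the recommended values from this work):

```python
import math

def vhs_cross_section(g, d_ref, t_ref, omega, m_r, k_b=1.380649e-23):
    """Variable Hard Sphere (VHS) total cross section at relative
    speed g (Bird's form): the effective diameter shrinks with speed
    as g^-(omega - 1/2), reproducing a viscosity scaling ~ T^omega."""
    nu = omega - 0.5
    d2 = d_ref**2 * ((2.0 * k_b * t_ref / (m_r * g**2)) ** nu
                     / math.gamma(2.5 - nu))
    return math.pi * d2

# Illustrative N2-N2 collision pair.
m_n2 = 4.65e-26                    # N2 molecular mass, kg
m_r = 0.5 * m_n2                   # reduced mass of the pair, kg
sigma = vhs_cross_section(g=1000.0, d_ref=4.17e-10, t_ref=273.0,
                          omega=0.74, m_r=m_r)
print(f"VHS cross section at 1 km/s: {sigma:.3e} m^2")
```

The VSS model adds a deflection-angle exponent on top of this, which is what restores consistent diffusion coefficients for the ionized mixtures discussed above.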
Monte Carlo simulation of Prussian blue analogs described by Heisenberg ternary alloy model
NASA Astrophysics Data System (ADS)
Yüksel, Yusuf
2015-11-01
Within the framework of Monte Carlo simulation technique, we simulate magnetic behavior of Prussian blue analogs based on Heisenberg ternary alloy model. We present phase diagrams in various parameter spaces, and we compare some of our results with those based on Ising counterparts. We clarify the variations of transition temperature and compensation phenomenon with mixing ratio of magnetic ions, exchange interactions, and exchange anisotropy in the present ferro-ferrimagnetic Heisenberg system. According to our results, thermal variation of the total magnetization curves may exhibit N, L, P, Q, R type behaviors based on the Néel classification scheme.
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Webster, C.
2014-12-01
The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. Because subsurface properties are typically heterogeneous, leading to a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method that further reduces the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solves or the number of needed time steps. This is achieved with mesh-free interpolation methods such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
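The telescoping-sum structure of MLMC can be sketched on a toy problem (a generic illustration; the level-dependent "discretisation error" here is artificial and unrelated to the reservoir model):

```python
import numpy as np

def mlmc_estimate(sampler, levels, n_samples, rng=None):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    estimated with many cheap coarse samples and few expensive fine
    ones. sampler(level, rng) returns a coupled pair (P_l, P_{l-1})."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for level, n in zip(levels, n_samples):
        diffs = [sampler(level, rng) for _ in range(n)]
        total += np.mean([fine - coarse for fine, coarse in diffs])
    return total

# Toy problem: estimate E[X^2] for X ~ N(0,1); level l carries an
# artificial discretisation-like error that shrinks as 2^-l. Coupling
# comes from both members of a pair sharing the same draw of X.
def sampler(level, rng):
    x = rng.normal()
    fine = x**2 + 2.0**-level * 0.1 * rng.normal()
    coarse = 0.0 if level == 0 else x**2 + 2.0**-(level - 1) * 0.1 * rng.normal()
    return fine, coarse

est = mlmc_estimate(sampler, levels=[0, 1, 2], n_samples=[4000, 1000, 250],
                    rng=np.random.default_rng(5))
print(f"MLMC estimate of E[X^2]: {est:.2f}")  # true value is 1
```

The acceleration proposed in the abstract works on top of this structure: interpolated coarse-level outputs warm-start each fine-level solve rather than changing the estimator itself.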
Modeling of near-continuum flows using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Lohn, P. D.; Haflinger, D. E.; McGregor, R. D.; Behrens, H. W.
1990-06-01
The direct simulation Monte Carlo (DSMC) method is used to model the flow of a hypersonic stream about a wedge. The Knudsen number of 0.00075 puts the flow into the continuum category and hence is a challenge for the DSMC method. The modeled flowfield is shown to agree extremely well with the experimental measurements in the wedge wake taken by Batt (1967). This experimental confirmation serves as a rigorous validation of the DSMC method and provides guidelines for computations of near-continuum flows.
Arterberry, Martha E; Bornstein, Marc H; Haynes, O Maurice
2011-04-01
Two analytical procedures for identifying young children as categorizers, the Monte Carlo Simulation and the Probability Estimate Model, were compared. Using a sequential touching method, children aged 12, 18, 24, and 30 months were given seven object sets representing different levels of categorical classification. From their touching performance, the probability that children were categorizing was then determined independently using Monte Carlo Simulation and the Probability Estimate Model. The two analytical procedures resulted in different percentages of children being classified as categorizers. Results using the Monte Carlo Simulation were more consistent with group-level analyses than results using the Probability Estimate Model. These findings recommend using the Monte Carlo Simulation for determining individual categorizer classification. Copyright © 2011 Elsevier Inc. All rights reserved.
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
NASA Astrophysics Data System (ADS)
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for estimating the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact-rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator compared quite favorably with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs, and the overall economics of open pit mines and rock quarries.
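Monte Carlo sampling from the Rosin-Rammler size distribution that underlies the Kuz-Ram model can be sketched by inverse-CDF sampling (the median size and uniformity index below are illustrative values, not results from the study):

```python
import numpy as np

def sample_fragment_sizes(n_frags, x50, n_index, rng=None):
    """Inverse-CDF sampling from the Rosin-Rammler distribution used
    in the Kuz-Ram model: fraction retained R(x) = exp(-(x / x_c)^n),
    with the characteristic size x_c fixed so that x50 is the median."""
    rng = np.random.default_rng() if rng is None else rng
    x_c = x50 / np.log(2.0) ** (1.0 / n_index)
    u = rng.random(n_frags)
    return x_c * (-np.log1p(-u)) ** (1.0 / n_index)

# Illustrative values: median fragment size 25 cm, uniformity index 1.4.
sizes = sample_fragment_sizes(50_000, x50=25.0, n_index=1.4,
                              rng=np.random.default_rng(6))
frac_oversize = (sizes > 100.0).mean()
print(f"median size: {np.median(sizes):.1f} cm, "
      f"fraction over 1 m: {frac_oversize:.3f}")
```

A fragmentation simulator of the kind described would additionally randomise x50 and n themselves from the rock-mass and blast-design inputs, propagating their uncertainty into the size distribution.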
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which offers two choices of boundary settings: the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, with the profiles of the captured ions used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). Under the free boundary condition, MCBC correctly predicted the peak of the highly charged ion output; under the Bohm condition, it predicted a similar charge state distribution width but a lower peak charge state. Comparisons between the simulation results and the ANL experimental measurements are presented and discussed.
AO modelling for wide-field E-ELT instrumentation using Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Basden, Alastair; Morris, Simon; Morris, Tim; Myers, Richard
2014-08-01
Extensive simulations of AO performance for several E-ELT instruments (including EAGLE, MOSAIC, HIRES and MAORY) have been ongoing using the Monte-Carlo Durham AO Simulation Package. We present the latest simulation results, including studies into DM requirements, dependencies of performance on asterism, detailed point spread function generation, accurate telescope modelling, and studies of laser guide star effects. Details of the simulations will be given, including the use of optical models of the E-ELT to generate wavefront sensor pupil illumination functions, laser guide star modelling, and investigations of different many-layer atmospheric profiles. We discuss issues related to ELT-scale simulation, how we have overcome these, and how we will be approaching forthcoming issues such as modelling of advanced wavefront control, multi-rate wavefront sensing, and advanced treatment of extended laser guide star spots. We also present progress made on integrating simulation with AO real-time control systems. The impact of simulation outcomes on instrument design studies will be discussed, and the ongoing work plan presented.
Cheong, Daniel W; Panagiotopoulos, Athanassios Z
2006-04-25
A lattice model for ionic surfactants with explicit counterions is proposed for which the micellization behavior can be accurately determined from grand canonical Monte Carlo simulations. The model is characterized by a few parameters that can be adjusted to represent various linear surfactants with ionic headgroups. The model parameters have a clear physical interpretation and can be obtained from experimental data unrelated to micellization, namely, geometric information and solubilities of tail segments. As a specific example, parameter values for sodium dodecyl sulfate were obtained by optimizing for the solubility of hydrocarbons in water and the structural properties of dodecane. The critical micelle concentration (cmc), average aggregation number, degree of counterion binding, and their dependence on temperature were determined from histogram reweighting grand canonical Monte Carlo simulations and were compared to experimental results. The model gives the correct trend and order of magnitude for all quantities but underpredicts the cmc and aggregation number. We suggest ways to modify the model that may improve agreement with experimental values.
Fast Off-Lattice Monte Carlo Simulations with a Novel Soft-Core Spherocylinder Model
NASA Astrophysics Data System (ADS)
Zong, Jing; Zhang, Xinghua; Wang, Qiang (David)
2011-03-01
Fast off-lattice Monte Carlo simulations with soft-core repulsive potentials that allow particle overlapping give orders of magnitude faster/better sampling of the configurational space than conventional molecular simulations with hard-core repulsions (such as in the Lennard-Jones potential). Here we present our fast off-lattice Monte Carlo simulations on the structures and phase transitions of liquid crystals and rod-coil diblock copolymers based on a novel and computationally efficient anisotropic soft-core potential that gives exact treatment of the excluded-volume interactions between two spherocylinders (thus the orientational interaction between them favoring their parallel alignment). Our model further takes into account the degree of overlap of two spherocylinders, thus superior to other soft-core models that depend only on their shortest distance. It has great potential applications in the study of liquid crystals, block copolymers containing rod blocks, and liquid crystalline polymers. Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).
A geometrical model for the Monte Carlo simulation of the TrueBeam linac.
Rodriguez, M; Sempau, J; Fogliata, A; Cozzi, L; Sauerwein, W; Brualla, L
2015-06-07
Monte Carlo simulation of linear accelerators (linacs) depends on the accurate geometrical description of the linac head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files of the flattening-filter-free (FFF) beams tallied at a plane located just upstream of the jaws. Yet, Monte Carlo simulations based on third-party tallied phase spaces are subject to limitations. In this work, an experimentally based geometry developed for the simulation of the FFF beams of the Varian TrueBeam linac is presented. The Monte Carlo geometrical model of the TrueBeam linac uses information provided by Varian that reveals large similarities between the TrueBeam machine and the Clinac 2100 downstream of the jaws. Thus, the upper part of the TrueBeam linac was modeled by introducing modifications to the Varian Clinac 2100 linac geometry. The most important of these modifications is the replacement of the standard flattening filters by ad hoc thin filters. These filters were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements performed with a diode detector for radiation fields ranging from 3 × 3 to 40 × 40 cm(2) at depths of maximum dose of 5 and 10 cm. Indicators of agreement between the experimental data and the simulation results obtained with the proposed geometrical model were the dose differences, the root-mean-square error and the gamma index. The same comparisons were performed for dose profiles obtained from Monte Carlo simulations using the phase-space files distributed by Varian for the TrueBeam linac as the sources of particles. Results of comparisons show a good agreement of the dose for the ansatz geometry similar to that obtained for the simulations with the TrueBeam phase-space files for all fields and depths considered, except for
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah; Kurniadi, Rizal
2015-09-30
Toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L}, μ{sub R}), and the deviations of the left and right curves (σ{sub L}, σ{sub R}). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
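The two-Gaussian sampling at the heart of such a toy model can be sketched in a few lines. This is a deliberately simplified illustration: the parameter values are invented placeholders, not the paper's fitted values, and the scission point R{sub c} with independent left/right widths is omitted by mirroring the left curve through nucleon-number conservation.

```python
import random

def sample_fission_yields(n_events, mu_l=96.0, sigma_l=5.0, a_total=236, seed=1):
    """Draw (light, heavy) fragment mass pairs from a Gaussian mass split.

    The light-fragment mass comes from the left Gaussian; the heavy mass
    follows from nucleon-number conservation A_light + A_heavy = A, so the
    right curve is the mirror image of the left one.  All parameter values
    are illustrative placeholders only.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_events):
        a_light = rng.gauss(mu_l, sigma_l)
        pairs.append((a_light, a_total - a_light))
    return pairs

pairs = sample_fission_yields(20000)
mean_light = sum(p[0] for p in pairs) / len(pairs)
```

A histogram of the two components of `pairs` reproduces the familiar double-humped asymmetric yield curve, and widening σ visibly spreads the distribution, as the abstract describes.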
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Wüst, Thomas; Landau, David P.
2011-09-01
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
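Wang-Landau sampling itself is compact enough to sketch. The toy version below estimates the density of states of a small 1D Ising ring rather than the HP lattice protein (pull moves and bond-rebridging moves are far more involved); the system size, flatness threshold, and final modification factor are arbitrary choices for illustration.

```python
import math
import random

def wang_landau_ising_ring(n_spins=8, log_f_final=1e-6, flatness=0.8, seed=2):
    """Wang-Landau estimate of the density of states g(E) of a 1D Ising ring.

    The random walk in energy space accepts moves with probability
    min(1, g(E_old)/g(E_new)) and refines log g until log f is small.
    """
    rng = random.Random(seed)
    spins = [1] * n_spins

    def energy():
        return -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))

    levels = list(range(-n_spins, n_spins + 1, 4))  # allowed ring energies
    log_g = {lv: 0.0 for lv in levels}
    hist = {lv: 0 for lv in levels}
    log_f = 1.0
    e = energy()
    while log_f > log_f_final:
        for _ in range(1000):
            i = rng.randrange(n_spins)
            spins[i] = -spins[i]
            e_new = energy()
            diff = log_g[e] - log_g[e_new]
            if diff >= 0 or rng.random() < math.exp(diff):
                e = e_new                 # accept the flip
            else:
                spins[i] = -spins[i]      # reject: flip back
            log_g[e] += log_f
            hist[e] += 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            hist = {lv: 0 for lv in hist}  # histogram flat: halve log f
            log_f /= 2
    return log_g

log_g = wang_landau_ising_ring()
# Exact counts for an 8-spin ring: g(-8) = 2, g(-4) = 56, g(0) = 140.
ratio_0 = math.exp(log_g[0] - log_g[-8])
ratio_4 = math.exp(log_g[-4] - log_g[-8])
```

Once log g(E) is known, thermodynamic properties at any temperature follow from a single reweighting sum, which is exactly what makes the method attractive for systems with rugged landscapes.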
Macroion solutions in the cell model studied by field theory and Monte Carlo simulations.
Lue, Leo; Linse, Per
2011-12-14
Aqueous solutions of charged spherical macroions with variable dielectric permittivity and their associated counterions are examined within the cell model using a field theory and Monte Carlo simulations. The field theory is based on separation of fields into short- and long-wavelength terms, which are subjected to different statistical-mechanical treatments. The simulations were performed by using a new, accurate, and fast algorithm for numerical evaluation of the electrostatic polarization interaction. The field theory provides counterion distributions outside a macroion in good agreement with the simulation results over the full range from weak to strong electrostatic coupling. A low-dielectric macroion leads to a displacement of the counterions away from the macroion.
Surface-subsurface model for a dimer-dimer catalytic reaction: a Monte Carlo simulation study
NASA Astrophysics Data System (ADS)
Khan, K. M.; Albano, E. V.
2002-02-01
The surface-subsurface model for a dimer-dimer reaction of the type A2 + 2B2→2AB2 has been studied through Monte Carlo simulation via a model based on the lattice gas non-thermal Langmuir-Hinshelwood mechanism, which involves the precursor motion of the B2 molecule. The motion of precursors is considered on the surface as well as in the subsurface. The most interesting feature of this model is that it yields a steady reactive window, which is separated by continuous and discontinuous irreversible phase transitions. The phase diagram is qualitatively similar to the well known Ziff, Gulari and Barshad (ZGB) model. The width of the window depends upon the mobility of precursors. The continuous transition disappears when the mobility of the surface precursors is extended to the third-nearest neighbourhood. The dependence of production rate on partial pressure of B2 dimer is predicted by simple mathematical equations in our model.
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in conventional moment-closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied as a logical alternative, wherein the number of computer operations increases only linearly with the number of independent variables, compared to the exponential increase in a conventional finite-difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although absolute conservation seems impossible for nonuniform flows, the present scheme has reduced the error considerably.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of the mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
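The core of such an RSM-plus-MCS loop is cheap random sampling of a fitted polynomial surrogate in place of the expensive full model. A minimal sketch follows, with invented coefficients and an illustrative incomplete fourth-order polynomial in two parameters (not the paper's fitted surface).

```python
import random

def rsm_predict(x1, x2, c):
    """Incomplete fourth-order polynomial response surface (illustrative terms)."""
    return (c[0] + c[1] * x1 + c[2] * x2 + c[3] * x1 * x2
            + c[4] * x1 ** 2 + c[5] * x2 ** 2
            + c[6] * x1 ** 4 + c[7] * x2 ** 4)

def mcs_statistics(c, mu, sigma, n=50000, seed=3):
    """Propagate Gaussian parameter uncertainty through the cheap RSM surrogate."""
    rng = random.Random(seed)
    samples = [rsm_predict(rng.gauss(mu[0], sigma[0]),
                           rng.gauss(mu[1], sigma[1]), c)
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var

# Linear special case for easy checking: y = 1 + 2*x1 + 3*x2.
coeff = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0]
mean, var = mcs_statistics(coeff, mu=(0.0, 0.0), sigma=(1.0, 1.0))
```

Because each surrogate evaluation is a handful of multiplications, tens of thousands of samples per optimizer iteration become affordable, which is what makes the inverse propagation tractable.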
NASA Astrophysics Data System (ADS)
Moulin, F.; Picaud, S.; Hoang, P. N. M.; Jedlovszky, P.
2007-10-01
The grand canonical Monte Carlo method is used to simulate the adsorption isotherms of water molecules on different types of model soot particles. The soot particles are modeled by graphite-type layers arranged in an onion-like structure that contains randomly distributed hydrophilic sites, such as OH and COOH groups. The calculated water adsorption isotherm at 298 K exhibits different characteristic shapes depending on both the type and the location of the hydrophilic sites and also on the size of the pores inside the soot particle. The different shapes of the adsorption isotherms result from different ways of water aggregation in and/or around the soot particle. The present results show the very weak influence of the OH sites on the water adsorption process when compared to the COOH sites. The results of these simulations can help in interpreting the experimental isotherms of water adsorbed on aircraft soot.
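The grand canonical machinery can be illustrated with the simplest possible case, an ideal gas, where only the particle number matters. The insertion/deletion acceptance rules below are the standard textbook ones, not code from the study; the activity value is an arbitrary example.

```python
import random

def gcmc_ideal_gas(z_v, n_steps=200000, seed=4):
    """Grand canonical Monte Carlo for an ideal gas: only N matters.

    z_v = exp(beta * mu) * V / Lambda^3 lumps chemical potential and volume
    into one activity parameter.  Detailed balance against the Poisson
    distribution P(N) ~ z_v^N / N! gives the textbook acceptance rules:
    insert with probability min(1, z_v / (N + 1)), delete with min(1, N / z_v).
    """
    rng = random.Random(seed)
    n = 0
    running_sum = 0
    for _ in range(n_steps):
        if rng.random() < 0.5:                              # attempt insertion
            if rng.random() < min(1.0, z_v / (n + 1)):
                n += 1
        elif n > 0 and rng.random() < min(1.0, n / z_v):    # attempt deletion
            n -= 1
        running_sum += n
    return running_sum / n_steps

avg_n = gcmc_ideal_gas(z_v=20.0)
```

In a real adsorption simulation each insertion/deletion also carries a Boltzmann factor for the interaction energy with the soot surface, and sweeping the chemical potential traces out the isotherm point by point.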
NASA Astrophysics Data System (ADS)
Hobler, Gerhard; Bradley, R. Mark; Urbassek, Herbert M.
2016-05-01
Sigmund's model of spatially resolved sputtering is the underpinning of many models of nanoscale pattern formation induced by ion bombardment. It is based on three assumptions: (i) the number of sputtered atoms is proportional to the nuclear energy deposition (NED) near the surface, (ii) the NED distribution is independent of the orientation and shape of the solid surface and is identical to the one in an infinite medium, and (iii) the NED distribution in an infinite medium can be approximated by a Gaussian. We test the validity of these assumptions using Monte Carlo simulations of He, Ar, and Xe impacts on Si at energies of 2, 20, and 200 keV with incidence angles from perpendicular to grazing. We find that for the more commonly-employed beam parameters (Ar and Xe ions at 2 and 20 keV and nongrazing incidence), the Sigmund model's predictions are within a factor of 2 of the Monte Carlo results for the total sputter yield and the first two moments of the spatially resolved sputter yield. This is partly due to a compensation of errors introduced by assumptions (i) and (ii). The Sigmund model, however, does not describe the skewness of the spatially resolved sputter yield, which is almost always significant. The approximation is much poorer for He ions and/or high energies (200 keV). All three of Sigmund's assumptions break down at grazing incidence angles. In all cases, we discuss the origin of the deviations from Sigmund's model.
Iterative optimisation of Monte Carlo detector models using measurements and simulations
NASA Astrophysics Data System (ADS)
Marzocchi, O.; Leone, D.
2015-04-01
This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: the acquisition in the laboratory of measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the "try and fail" approach typical of the prior techniques.
NASA Astrophysics Data System (ADS)
Rothfischer, Ramona; Grosenick, Dirk; Macdonald, Rainer
2015-07-01
We discuss the determination of optical properties of thick scattering media from measurements of time-resolved transmittance by diffusion theory, using Monte Carlo simulations as a gold standard for modeling photon migration. Our theoretical and experimental investigations reveal differences between the calculated distributions of times of flight (DTOFs) of photons from the two models, which result in an overestimation of the absorption and reduced scattering coefficients by diffusion theory that becomes larger for small scattering coefficients. By introducing a temporal shift in the DTOFs obtained with the diffusion model as an additional fit parameter, the deviation in the absorption coefficient can be compensated almost completely. If the scattering medium is additionally covered by transparent layers (e.g. glass plates), the deviation between the DTOFs from the two models is even larger, which mainly affects the determination of the reduced scattering coefficient by diffusion theory. A temporal shift improves the accuracy of the optical properties derived by diffusion theory in this case as well.
A Monte Carlo Radiation Model for Simulating Rarefied Multiphase Plume Flows
2005-05-01
A Monte Carlo ray trace radiation model is presented for the determination of radiative
NASA Astrophysics Data System (ADS)
Rajagopal, S.; Huntington, J. L.; Niswonger, R. G.; Reeves, M.; Pohll, G.
2012-12-01
Modeling complex hydrologic systems requires increasingly complex models to sufficiently describe the physical mechanisms observed in the domain. Streamflow in our study area is primarily driven by climate, reservoirs, and surface and groundwater interactions. Hence, in this study we use the coupled surface water and groundwater flow model GSFLOW to simulate streamflow in the Truckee River basin, Nevada and California. To characterize this hydrologic system, the model domain is discretized into ~10,500 grid cells of 300 m resolution, for which a priori parameter estimates were derived from observed climate, soils, geology, and well logs, with default values used for the remaining parameters. Due to the high dimensionality of the problem, it is important to quantify model uncertainty from multiple sources (parameters, climate input). In the current study, we adopt a stepwise approach to calibrate the model and to quantify the uncertainty in the simulation of different hydro-meteorological fluxes. This approach is preferred, first, because of the availability of multiple observations such as precipitation, solar radiation, snow depth and snow water equivalent, remotely sensed snow cover, and observed streamflow. Second, by focusing on individual modules and the parameters associated with simulating one process (e.g. solar radiation), we reduce the parameter search space, which improves the robustness of the search algorithm in identifying the global minimum. The Differential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov chain Monte Carlo (MCMC) sampler, is applied to the GSFLOW model in this stepwise approach to quantify meteorological input and parameter uncertainty. Results from this approach, posterior distributions for model parameters, and model uncertainty are presented. This analysis will not only produce a robust model but will also help model developers understand nonlinear relationships between model parameters and simulated processes.
Yuan, Jiankui; Zheng, Yiran; Wessels, Barry; Lo, Simon S; Ellis, Rodney; Machtay, Mitchell; Yao, Min
2016-12-01
A virtual source model for Monte Carlo simulations of helical TomoTherapy has been developed previously by the authors. The purpose of this work is to perform experiments in an anthropomorphic (RANDO) phantom with the same order of complexity as in clinical treatments, to validate the virtual source model for use in quality assurance secondary checks of TomoTherapy patient planning doses. Helical TomoTherapy involves a complex delivery pattern with irregular beam apertures and couch movement during irradiation. Monte Carlo simulation, as the most accurate dose algorithm, is desirable in radiation dosimetry. Current Monte Carlo simulations for helical TomoTherapy adopt the full Monte Carlo model, which includes detailed modeling of individual machine components, and thus large phase-space files are required at different scoring planes. As an alternative approach, we previously developed a virtual source model that does not require large phase-space files for patient dose calculations. In this work, we apply the simulation system to recompute patient doses, which were generated by the treatment planning system, in an anthropomorphic phantom to mimic real patient treatments. We performed thermoluminescence dosimeter point-dose and film measurements to compare with the Monte Carlo results. Thermoluminescence dosimeter measurements show that the relative difference between Monte Carlo and the treatment planning system is within 3%, with the largest difference less than 5%, for both test plans. The film measurements demonstrated 85.7% and 98.4% passing rates using the 3 mm/3% acceptance criterion for the head-and-neck and lung cases, respectively. Over 95% passing rate is achieved if a 4 mm/4% criterion is applied. For the dose-volume histograms, very good agreement is obtained between the Monte Carlo and treatment planning system methods for both cases. The experimental results demonstrate that the virtual source model Monte Carlo system can be a viable option for the
Monte-Carlo simulations of a coarse-grained model for α-oligothiophenes
NASA Astrophysics Data System (ADS)
Almutairi, Amani; Luettmer-Strathmann, Jutta
The interfacial layer of an organic semiconductor in contact with a metal electrode has important effects on the performance of thin-film devices. However, the structure of this layer is not easy to model. Oligothiophenes are small, π-conjugated molecules with applications in organic electronics that also serve as small-molecule models for polythiophenes. α-hexithiophene (6T) is a six-ring molecule whose adsorption on noble metal surfaces has been studied extensively (see, e.g., Ref.). In this work, we develop a coarse-grained model for α-oligothiophenes. We describe the molecules as linear chains of bonded, discotic particles with Gay-Berne potential interactions between non-bonded ellipsoids. We perform Monte Carlo simulations to study the structure of isolated and adsorbed molecules.
NASA Astrophysics Data System (ADS)
Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard
2016-03-01
Laser speckle imaging (LSI) enables measurement of relative blood flow in microvasculature and perfusion in tissues. To determine the impact of tissue optical properties and perfusion dynamics on speckle contrast, we developed a computational simulation of laser speckle contrast imaging. We used a discrete absorption-weighted Monte Carlo simulation to model the transport of light in tissue. We simulated optical excitation of a uniform flat light source and tracked the momentum transfer of photons as they propagated through a simulated tissue geometry. With knowledge of the probability distribution of momentum transfer occurring in various layers of the tissue, we calculated the expected laser speckle contrast arising with coherent excitation using both reflectance and transmission geometries. We simulated light transport in a single homogeneous tissue while independently varying either absorption (0.001-100 mm^-1), reduced scattering (0.1-10 mm^-1), or anisotropy (0.05-0.99) over a range of values relevant to blood and commonly imaged tissues. We observed that contrast decreased by 49% with an increase in optical scattering, and increased by 130% with absorption (exposure time = 1 ms). We also explored how speckle contrast was affected by the depth (0-1 mm) and flow speed (0-10 mm/s) of a dynamic vascular inclusion. This model of speckle contrast is important to increase our understanding of how parameters such as perfusion dynamics, vessel depth, and tissue optical properties affect laser speckle imaging.
NASA Astrophysics Data System (ADS)
Domingue, D. L.; Cheng, A. F.
1997-07-01
The reflectance properties of a scattering surface are governed by the surface porosity, the single scattering albedo of the particles composing the surface, the single scattering function of the particles, the relative percentage of different particle types (distinguished by differences in their single scattering albedo and single scattering function) within the surface, and the physical relationship between particles (i.e. is the surface layer randomly filled with material or is there a structure to the filling of the surface layer). We have developed a Monte-Carlo simulation of radiative transfer through a particulate surface layer where the above parameters are varied. Photometric phase curves are constructed to examine variations in brightness of the particulate surface as functions of photon incidence and emission angles, for various types of surface microscopic structure and particle scattering properties. The results are compared with the predictions from photometric theory (e.g., Hapke's model). The scattered photons are divided into two groups, singly scattered and multiply scattered, in order to examine the relative importance of multiple scattering as a function of single scattering albedo. The single vs multiple scattering results from the Monte-Carlo simulation are also compared to predictions from photometric theory.
Bisaso, Kuteesa R; Mukonzo, Jackson K; Ette, Ene I
2015-11-01
The study was undertaken to develop a pharmacokinetic-pharmacodynamic model to characterize efavirenz-induced neuropsychologic impairment, given preexistent impairment, which can be used for the optimization of efavirenz therapy via Monte Carlo simulations. The modeling was performed with NONMEM 7.2. A 1-compartment pharmacokinetic model was fitted to efavirenz concentration data from 196 Ugandan patients treated with a 600-mg daily efavirenz dose. Pharmacokinetic parameters and area under the curve (AUC) were derived. Neuropsychologic evaluation of the patients was done at baseline and in week 2 of antiretroviral therapy. A discrete-time 2-state first-order Markov model was developed to describe neuropsychologic impairment. Efavirenz AUC, day 3 efavirenz trough concentration, and female sex increased the probability (P01) of neuropsychologic impairment. Efavirenz oral clearance (CL/F) increased the probability (P10) of resolution of preexistent neuropsychologic impairment. The predictive performance of the reduced (final) model, given the data, incorporating AUC on P01 and CL/F on P10, showed that the model adequately characterized the neuropsychologic impairment observed with efavirenz therapy. Simulations with the developed model predicted a 7% overall reduction in neuropsychologic impairment probability at 450 mg of efavirenz. We recommend a reduction in efavirenz dose from 600 to 450 mg, because the 450-mg dose has been shown to produce sustained antiretroviral efficacy.
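The discrete-time 2-state first-order Markov model lends itself to a short simulation sketch. The transition probabilities here are invented placeholders; in the paper they are functions of covariates (AUC and sex for P01, CL/F for P10).

```python
import random

def simulate_impairment(p01, p10, n_patients=20000, n_visits=50, seed=5):
    """Discrete-time two-state (0 = unimpaired, 1 = impaired) Markov chain.

    p01 and p10 are per-visit transition probabilities, collapsed here to
    fixed placeholder values.  Returns the fraction of simulated patients
    impaired at the final visit.
    """
    rng = random.Random(seed)
    impaired = 0
    for _ in range(n_patients):
        state = 0
        for _ in range(n_visits):
            if state == 0:
                if rng.random() < p01:
                    state = 1
            elif rng.random() < p10:
                state = 0
        impaired += state
    return impaired / n_patients

frac_impaired = simulate_impairment(p01=0.1, p10=0.3)
# Stationary prevalence is p01 / (p01 + p10) = 0.25 for these values.
```

Dose optimization then amounts to re-running such simulations with covariate-dependent P01 and P10 evaluated at each candidate dose and comparing the predicted impairment probabilities.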
NASA Astrophysics Data System (ADS)
Chen, Shaohua; Xu, Yaopengxiao; Jiao, Yang
2016-12-01
Microstructure control is an important subject in solid-state sintering and plays a crucial role in determining post-sintering material properties, such as strength, toughness and density, to name but a few. The preponderance of existing numerical sintering simulations model the morphology evolution and densification process driven by surface-energy minimization, either by dilating the particles to be sintered or by using a vacancy-annihilation model. Here, we develop a novel kinetic Monte Carlo model of morphology evolution and densification during free sintering. Specifically, we derive analytically a heterogeneous densification rate of the sintering system by considering sintering-stress-induced mass transport. The densification of the system is achieved by modeling sintering-stress-induced mass transfer via effective particle displacement and grain-boundary migration, using an efficient two-step iterative interfacial-energy minimization procedure. Coarsening is also considered in the later stages of the simulations. We show that our model can accurately capture the diffusion-induced evolution of particle morphology, including neck formation and growth, and realistically reproduce the overall densification of the sintered material. The computationally obtained dynamic density evolution curves for both two-particle sintering and many-particle sintering are found to be in excellent agreement with the corresponding experimental master sintering curves. Our model can be utilized to control a variety of structural and physical properties of the sintered materials, such as pore size and final material density.
NASA Astrophysics Data System (ADS)
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences between results obtained by quantitative relations and results obtained by the proposed method at a risk rate of 5%.
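A minimal sketch of effect propagation and combination by Monte Carlo simulation follows. The factor names and contribution degrees are invented, and the paper's landmark-based range division is simplified to uniform sampling over the whole range.

```python
import random

def propagate_effects(n_trials=50000, seed=6):
    """Monte Carlo propagation and combination of business-factor effects.

    Two source factors propagate to one destination node; each arc carries
    a contribution degree (weight), and the destination combines effects as
    a weighted sum.  All names and weights are illustrative placeholders.
    """
    rng = random.Random(seed)
    weights = {"demand": 0.7, "cost": -0.3}   # contribution degrees per arc
    outcomes = []
    for _ in range(n_trials):
        demand = rng.uniform(0.0, 1.0)   # qualitative range mapped onto [0, 1]
        cost = rng.uniform(0.0, 1.0)
        outcomes.append(weights["demand"] * demand + weights["cost"] * cost)
    mean = sum(outcomes) / n_trials
    downside = sum(1 for o in outcomes if o < 0) / n_trials
    return mean, downside

mean_effect, downside = propagate_effects()
```

The sampled distribution at the destination node yields exactly the kind of statistical summary (mean effect and probability of a negative outcome) that scenario comparison at a given risk rate requires.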
A review of Monte Carlo simulations for the Bose-Hubbard model with diagonal disorder
NASA Astrophysics Data System (ADS)
Pollet, Lode
2013-10-01
We review the physics of the Bose-Hubbard model with disorder in the chemical potential, focusing on recently published analytical arguments in combination with quantum Monte Carlo simulations. Apart from the superfluid and Mott insulator phases that can occur in this system without disorder, disorder allows for an additional phase, called the Bose glass phase. The topology of the phase diagram is subject to strong theorems proving that the Bose glass phase must intervene between the superfluid and the Mott insulator, implying a Griffiths transition between the Mott insulator and the Bose glass. The full phase diagrams in 3d and 2d are discussed, and we zoom in on the insensitivity of the transition line between the superfluid and the Bose glass in the close vicinity of the tip of the Mott insulator lobe. We briefly comment on the established and remaining questions in the 1d case, and give a short overview of numerical work on related models.
Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.
Groth, Detlef
2017-04-01
Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts was used in each Monte Carlo step to re-calculate next-generation body heights. Network hubs were defined by the importance of a district within the spatial network, evaluated by various centrality measures; taller individuals were favored to migrate into these hubs, while backward migration of the same number of individuals was random, not biased towards body height. In the null model there were no road connections, so height information could not be exchanged between districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later
Parallel Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ren, Ruichao; Orkoulas, G.
2007-06-01
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
NASA Astrophysics Data System (ADS)
Matsumoto, T.
2007-09-01
Monte Carlo simulations are performed to evaluate depth-dose distributions for possible treatment of cancers by boron neutron capture therapy (BNCT). The ICRU computational model of ADAM & EVA was used as a phantom to simulate tumors at a depth of 5 cm in central regions of the lungs, liver and pancreas. Tumors of the prostate and osteosarcoma were also centered at the depth of 4.5 and 2.5 cm in the phantom models. The epithermal neutron beam from a research reactor was the primary neutron source for the MCNP calculation of the depth-dose distributions in those cancer models. For brain tumor irradiations, the whole-body dose was also evaluated. The MCNP simulations suggested that a lethal dose of 50 Gy to the tumors can be achieved without reaching the tolerance dose of 25 Gy to normal tissue. The whole-body phantom calculations also showed that the BNCT could be applied for brain tumors without significant damage to whole-body organs.
3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.
2014-07-01
After deep-space hibernation, ESA's Rosetta spacecraft was successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and start to observe the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling in order to interpret the resulting measurements and establish observational and data analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method to study rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback with 3D DSMC is that it is computationally highly intensive and thus time consuming. However, the performance can be dramatically increased with parallel computing on Graphic Processor Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1 where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models which are based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC^{++} (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24
Wysong, Ingrid; Gimelshein, Sergey; Bondar, Yevgeniy; Ivanov, Mikhail
2014-04-15
Validation of three direct simulation Monte Carlo chemistry models—total collision energy, Quantum Kinetic, and Kuznetsov state specific (KSS)—is conducted through the comparison of calculated vibrational temperatures of molecular oxygen with measured values inside a normal shock wave. First, the 2D geometry and numerical approach used to simulate the shock experiments is verified. Next, two different vibrational relaxation models are validated by comparison with data for the M = 9.3 case where dissociation is small in the nonequilibrium region of the shock and with newly obtained thermal rates. Finally, the three chemistry model results are compared for M = 9.3 and 13.4 in the region where the vibrational temperature is greatly different from the rotational and translational temperature, and thus nonequilibrium dissociation is important. It is shown that the peak vibrational temperature is very sensitive to the initial nonequilibrium rate of reaction in the chemistry model and that the vibrationally favored KSS model is much closer to the measured peak, but the post-peak behavior indicates that some details of the model still need improvement.
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon “packets” as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from vessels, at times having a distinct “humped” morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with “virtual-electrode” regions of strong de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
Modeling the tight focusing of beams in absorbing media with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Brandes, Arnd R.; Elmaklizi, Ahmed; Akarçay, H. Günhan; Kienle, Alwin
2014-11-01
A severe drawback to the scalar Monte Carlo (MC) method is the difficulty of introducing diffraction when simulating light propagation. This hinders, for instance, the accurate modeling of beams focused through microscope objectives, where the diffraction patterns in the focal plane are of great importance in various applications. Here, we propose to overcome this issue by means of a direct extinction method. In the MC simulations, the photon paths' initial positions are sampled from probability distributions which are calculated with a modified angular spectrum of the plane waves technique. We restricted our study to the two-dimensional case, and investigated the feasibility of our approach for absorbing yet nonscattering materials. We simulated the focusing of collimated beams with uniform profiles through microscope objectives. Our results were compared with those yielded by independent simulations using the finite-difference time-domain method. Very good agreement was achieved between the results of both methods, not only for the power distributions around the focal region including diffraction patterns, but also for the distribution of the energy flow (Poynting vector).
Using a direct simulation Monte Carlo approach to model collisions in a buffer gas cell.
Doppelbauer, Maximilian J; Schullian, Otto; Loreau, Jerome; Vaeck, Nathalie; van der Avoird, Ad; Rennick, Christopher J; Softley, Timothy P; Heazlewood, Brianna R
2017-01-28
A direct simulation Monte Carlo (DSMC) method is applied to model collisions between He buffer gas atoms and ammonia molecules within a buffer gas cell. State-to-state cross sections, calculated as a function of the collision energy, enable the inelastic collisions between He and NH3 to be considered explicitly. The inclusion of rotational-state-changing collisions affects the translational temperature of the beam, indicating that elastic and inelastic processes should not be considered in isolation. The properties of the cold molecular beam exiting the cell are examined as a function of the cell parameters and operating conditions; the rotational and translational energy distributions are in accord with experimental measurements. The DSMC calculations show that thermalisation occurs well within the typical 10-20 mm length of many buffer gas cells, suggesting that shorter cells could be employed in many instances, yielding a higher flux of cold molecules.
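For orientation, the collision step such DSMC codes are built on can be sketched as follows. This toy version performs acceptance-rejection pair selection in a single cell for equal-mass hard spheres with isotropic scattering; the actual He-NH3 treatment uses state-to-state cross sections and unequal masses, so this is a structural sketch only.

```python
import math
import random

def dsmc_collide_cell(velocities, n_pairs, vr_max):
    """Hard-sphere DSMC collision step for one cell (illustrative toy,
    equal-mass particles assumed).  Candidate pairs are accepted with
    probability v_rel / vr_max and given isotropic post-collision
    velocities, which conserves momentum and kinetic energy."""
    N = len(velocities)
    for _ in range(n_pairs):
        i, j = random.sample(range(N), 2)          # random candidate pair
        vr = [a - b for a, b in zip(velocities[i], velocities[j])]
        vr_mag = math.sqrt(sum(c * c for c in vr))
        if random.random() * vr_max < vr_mag:      # acceptance-rejection
            # centre-of-mass velocity is unchanged by the collision
            vcm = [(a + b) / 2 for a, b in zip(velocities[i], velocities[j])]
            # scatter the relative velocity isotropically, same magnitude
            cos_t = 1 - 2 * random.random()
            sin_t = math.sqrt(1 - cos_t * cos_t)
            phi = 2 * math.pi * random.random()
            vr_new = [vr_mag * sin_t * math.cos(phi),
                      vr_mag * sin_t * math.sin(phi),
                      vr_mag * cos_t]
            velocities[i] = [vcm[k] + vr_new[k] / 2 for k in range(3)]
            velocities[j] = [vcm[k] - vr_new[k] / 2 for k in range(3)]
    return velocities
```

The state-to-state extension described in the abstract replaces the constant hard-sphere acceptance with energy-dependent inelastic cross sections and updates the molecules' rotational states alongside their velocities.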
Modeling of vision loss due to vitreous hemorrhage by Monte Carlo simulation.
Al-Saeed, Tarek A; El-Zaiat, Sayed Y
2014-08-01
Vitreous hemorrhage is the leaking of blood into the vitreous humor which results from different diseases. Vitreous hemorrhage leads to vision problems ranging from mild to severe cases in which blindness occurs. Since erythrocytes are the major scatterers in blood, we are modeling light propagation in vitreous humor with erythrocytes randomly distributed in it. We consider the total medium (vitreous humor plus erythrocytes) as a turbid medium and apply Monte Carlo simulation. Then, we calculate the parameters characterizing vision loss due to vitreous hemorrhage. This work shows that the increase of the volume fraction of erythrocytes results in a decrease of the total transmittance of the vitreous body and an increase in the radius of maximum transmittance, the width of the circular strip of bright area, and the radius of the shadow area.
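The transmittance calculation described above rests on standard photon-packet random walks. A minimal sketch, assuming an isotropically scattering homogeneous slab as a stand-in for the vitreous-plus-erythrocyte medium (the study used the actual ocular geometry and erythrocyte phase function):

```python
import math
import random

def slab_transmittance(mu_s, mu_a, thickness, n_photons=20000):
    """Toy photon Monte Carlo through a turbid slab: each packet takes
    exponentially distributed free paths and is either absorbed,
    back-scattered out of the front face, or transmitted through the
    back face.  Isotropic scattering is assumed for simplicity."""
    mu_t = mu_s + mu_a
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                   # launch normally into the slab
        while True:
            z += uz * (-math.log(random.random()) / mu_t)   # free path
            if z >= thickness:
                transmitted += 1           # exits the back face
                break
            if z <= 0.0:
                break                      # escapes back out the front
            if random.random() < mu_a / mu_t:
                break                      # absorbed at this interaction
            uz = 1 - 2 * random.random()   # isotropic re-direction (z-cosine)
    return transmitted / n_photons
```

Increasing the scattering coefficient (more erythrocytes) lengthens photon paths and so raises absorption losses, reproducing the qualitative transmittance decrease the abstract reports.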
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
Numazawa, Satoshi; Smith, Roger
2011-10-01
Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
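The lattice-KMC machinery referred to above reduces, at its core, to the standard rejection-free event-selection step. The sketch below shows that step only (BKL/Gillespie-type selection with an exponential clock); the paper's classification of sub-barrier transitions as freely accepted thermal fluctuations is not reproduced here.

```python
import math
import random

def kmc_step(rates, t):
    """One step of standard rejection-free kinetic Monte Carlo: choose an
    event with probability proportional to its rate, then advance the
    clock by an exponentially distributed waiting time with mean
    1 / (total rate).  Returns (chosen event index, new time)."""
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for idx, k in enumerate(rates):      # linear search through rate table
        acc += k
        if r < acc:
            break
    t += -math.log(random.random()) / total   # exponential time increment
    return idx, t
```

The hierarchical-transition-level scheme of the paper sits on top of this loop: events below a barrier threshold bypass the rate table entirely, which is what yields the reported reduction in KMC steps.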
Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim
2016-06-01
In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data for several pure components, comprising a set of supercritical isotherms and part of the two-phase envelope, were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, an excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
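The surrogate idea can be illustrated in one dimension. The sketch below builds a Legendre polynomial-chaos surrogate of a model with a uniform random input on [-1, 1], estimating each spectral coefficient by Monte Carlo projection; the paper's surrogates are multi-dimensional and fitted to Gibbs-ensemble MC data, so this is a minimal stand-in.

```python
import random

def legendre(k, x):
    """Legendre polynomials P0..P2, orthogonal on [-1, 1]."""
    return [1.0, x, 1.5 * x * x - 0.5][k]

def pc_surrogate(model, n_samples=20000, degree=2):
    """Build a 1-D polynomial-chaos surrogate of `model` (a function of a
    uniform input on [-1, 1]) by estimating each spectral coefficient
    c_k = E[f P_k] / E[P_k^2] with Monte Carlo projection.  Once built,
    evaluating the surrogate is essentially free, which is the point of
    the approach described in the abstract."""
    xs = [2 * random.random() - 1 for _ in range(n_samples)]
    coeffs = []
    for k in range(degree + 1):
        num = sum(model(x) * legendre(k, x) for x in xs) / n_samples
        den = sum(legendre(k, x) ** 2 for x in xs) / n_samples
        coeffs.append(num / den)
    return (lambda x: sum(c * legendre(k, x) for k, c in enumerate(coeffs)),
            coeffs)
```

In the paper's setting, `model` would be an expensive MC molecular simulation returning a thermodynamic quantity as a function of the LJ parameters; here any cheap function stands in for it.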
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking players.
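The building block of the game-set-match hierarchy is the probability of winning a single game given a point-win probability. A hedged Monte Carlo sketch of that lowest level (i.i.d. points and standard scoring with the two-point-lead deuce rule assumed; the dissertation's contribution is precisely the relaxation of the i.i.d. assumption):

```python
import random

def game_win_prob(p, n_games=20000):
    """Monte Carlo estimate of a server's probability of winning one game
    when each point is won i.i.d. with probability p.  Points accumulate
    until one side has at least four points and a two-point lead (the
    usual deuce/advantage rule)."""
    wins = 0
    for _ in range(n_games):
        a = b = 0
        while True:
            if random.random() < p:
                a += 1
            else:
                b += 1
            if a >= 4 and a - b >= 2:
                wins += 1
                break
            if b >= 4 and b - a >= 2:
                break
    return wins / n_games
```

Because the game amplifies the point probability (a 70% point-winner takes well over 70% of games), chaining this estimate through sets and matches produces the strongly skewed match pdfs the abstract describes.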
Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(1Σg+)-N2(1Σg+) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
Monte Carlo computer simulations of Venus equilibrium and global resurfacing models
NASA Technical Reports Server (NTRS)
Dawson, D. D.; Strom, R. G.; Schaber, G. G.
1992-01-01
Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or equal to 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then, with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid model for an explanation of the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
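The structure of such a resurfacing simulation is simple enough to sketch. The toy below places craters at random on a unit-square "planet" and lets each volcanic event erase craters inside a randomly placed patch while new craters keep accumulating; the patch fraction and the per-event crater influx are illustrative numbers, and the study itself used spherical geometry with calibrated areas and intervals.

```python
import random

def resurfacing_sim(n_craters=500, n_events=50, patch_frac=0.03,
                    craters_per_interval=5):
    """Toy Monte Carlo of equilibrium resurfacing: start with a random
    crater population on the unit square, then repeatedly (1) erase every
    crater inside a randomly placed square patch covering `patch_frac` of
    the area and (2) add a few fresh impact craters.  Returns the
    surviving crater coordinates, whose spatial statistics are what the
    real study compared against the observed Venus crater record."""
    craters = [(random.random(), random.random()) for _ in range(n_craters)]
    side = patch_frac ** 0.5                    # square patch of given area
    for _ in range(n_events):
        x0 = random.random() * (1 - side)
        y0 = random.random() * (1 - side)
        craters = [(x, y) for (x, y) in craters
                   if not (x0 <= x <= x0 + side and y0 <= y <= y0 + side)]
        craters += [(random.random(), random.random())
                    for _ in range(craters_per_interval)]
    return craters
```

Running many such realizations and comparing the resulting crater counts and clustering against the observed, spatially random population is the test that the abstract reports the equilibrium model fails.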
A virtual source model for Monte Carlo simulation of helical tomotherapy.
Yuan, Jiankui; Rong, Yi; Chen, Quan
2015-01-08
The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in a patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency contained in the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were
NASA Astrophysics Data System (ADS)
Mok, C. M.; Suribhatla, R. M.; Wanakule, N.; Zhang, M.
2009-12-01
A reliability-based water resources management framework has been developed by AMEC Geomatrix over the last few years to optimally manage a water supply system that serves over two million people in the northern Tampa Bay region in Florida, USA, while protecting wetland health and preventing seawater intrusion. The framework utilizes stochastic optimization techniques to account for uncertainties associated with the prediction of water demand, surface water availability, baseline groundwater levels, a non-anthropogenic reservoir water budget, and hydrological/hydrogeological properties. Except for the hydrogeological properties, these uncertainties are partially caused by uncertainties in future rainfall patterns in the region. We present here a novel multivariate statistical model of rainfall and a methodology for generating Monte-Carlo realizations based on the statistical model. The model is intended to capture spatial-temporal characteristics of daily rainfall intensity in 172 basins in the northern Tampa Bay region and is characterized by its high dimensionality. Daily rainfall intensity in each basin is expressed as the product of a binary random variable (RV) corresponding to the event of rain and a continuous RV representing the amount of rain. For the binary RVs we use a bivariate transformation technique to generate the Monte-Carlo realizations that form the basis for sequential simulation of the continuous RVs. A non-parametric Gaussian copula is used to develop the multivariate model for continuous RVs. This methodology captures key spatial and temporal characteristics of daily rainfall intensities and overcomes numerical issues posed by the high dimensionality of the Gaussian copula.
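The binary-times-continuous construction can be sketched for a pair of basins. Below, a Gaussian copula with correlation `rho` couples the basins, the latent normal is thresholded to decide wet/dry days, and wet days draw an exponential depth; the exponential marginal and the two-basin restriction are illustrative simplifications of the study's 172-basin, non-parametric model.

```python
import math
import random
from statistics import NormalDist

def sample_daily_rain(rho, p_wet, mean_depth, n_days=1000):
    """Toy two-basin daily rainfall generator: correlated standard
    normals (2x2 Cholesky) drive both the wet/dry indicator (latent
    normal below the p_wet quantile => wet) and, via the conditional
    uniform, an exponential rain depth.  Returns a list of
    (depth_basin_A, depth_basin_B) tuples in the same units as
    mean_depth."""
    nd = NormalDist()
    z_wet = nd.inv_cdf(p_wet)              # latent threshold for a wet day
    days = []
    for _ in range(n_days):
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
        rain = []
        for z in (z1, z2):
            if z < z_wet:                  # wet day in this basin
                u = nd.cdf(z) / p_wet      # conditional uniform in (0, 1)
                rain.append(-mean_depth * math.log(1 - u))
            else:
                rain.append(0.0)
        days.append(tuple(rain))
    return days
```

Tying both the occurrence and the amount to the same latent normal makes wet days, and heavy-rain days, co-occur across basins, which is the spatial coherence the framework needs for its stochastic optimization inputs.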
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-03-07
The concerns about the supply and resources of rare-earth (RE) metals have generated a lot of interest in the search for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at the atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (∼2 nm) over a wide range of temperatures. The compositions of the main constituents in these two phases become higher as the temperature decreases. Both the FeCo-rich and NiAl-rich phases show B2 ordering, with Fe and Al on the α-site and Ni and Co on the β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. A small magnetic moment is also observed in the NiAl-rich phase, but it decreases as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. The results of our Monte Carlo simulations are consistent with available experimental results.
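The energy model and move set can be sketched as follows. This is a toy pair-only cluster expansion on a 2D square lattice with made-up interaction values, not the fitted alnico 5–7 model; the composition-conserving (Kawasaki-style) swap move is what lets such simulations model decomposition into two phases:

```python
import numpy as np

# Toy pair-only cluster expansion on a 2D square lattice (a stand-in for the
# BCC alnico lattice); sigma = +1/-1 encodes two atomic species. The effective
# cluster interactions J_POINT and J_NN are illustrative, not fitted values.
J_POINT, J_NN = 0.02, -0.05

def ce_energy(sigma):
    """Cluster-expansion energy: E = J_point * sum_i sigma_i + J_nn * sum_<ij> sigma_i sigma_j."""
    nn_sum = (sigma * np.roll(sigma, 1, axis=0) + sigma * np.roll(sigma, 1, axis=1)).sum()
    return J_POINT * sigma.sum() + J_NN * nn_sum

def kawasaki_step(sigma, beta, rng):
    """One composition-conserving Metropolis move: swap two unlike sites."""
    L = sigma.shape[0]
    for _ in range(100):  # find a pair of sites holding different species
        a, b = tuple(rng.integers(0, L, 2)), tuple(rng.integers(0, L, 2))
        if sigma[a] != sigma[b]:
            break
    e_old = ce_energy(sigma)
    sigma[a], sigma[b] = sigma[b], sigma[a]
    dE = ce_energy(sigma) - e_old
    if dE > 0 and rng.random() >= np.exp(-beta * dE):
        sigma[a], sigma[b] = sigma[b], sigma[a]  # reject: undo the swap
    return sigma

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
n_minus = int((spins == -1).sum())
for _ in range(200):
    spins = kawasaki_step(spins, beta=2.0, rng=rng)
```

Because only swaps are allowed, the overall composition is conserved exactly, so any ordering that emerges is phase decomposition rather than a change in stoichiometry.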
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Douglass, Michael; Bezak, Eva; Penfold, Scott
2012-06-15
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB® code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB® code was able to grow semi-realistic cell distributions (≈2 × 10⁸ cells in 1 cm³) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
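A simplified sketch of growing such a randomized cell distribution, using spheres in place of ellipsoids so that overlap detection reduces to a centre-distance test (the paper's eigenvalue method handles the general rotated-ellipsoid case); all sizes below are illustrative:

```python
import numpy as np

def grow_cell_volume(n_cells, box, radius_range, seed=None, max_tries=20000):
    """Randomly place non-overlapping spherical 'cells' inside a cubic box.

    Simplified stand-in for the paper's algorithm: spheres replace the
    randomized ellipsoids, so the overlap check is just a centre-distance
    comparison instead of the eigenvalue method.
    """
    rng = np.random.default_rng(seed)
    centres, radii = [], []
    for _ in range(max_tries):
        if len(centres) == n_cells:
            break
        r = rng.uniform(*radius_range)
        c = rng.uniform(r, box - r, size=3)   # keep the whole cell inside the box
        # Rejection step: accept only if the candidate touches no placed cell.
        if all(np.linalg.norm(c - c2) >= r + r2 for c2, r2 in zip(centres, radii)):
            centres.append(c)
            radii.append(r)
    return np.array(centres), np.array(radii)

centres, radii = grow_cell_volume(50, box=100.0, radius_range=(2.0, 5.0), seed=42)
```

The resulting coordinate list (centre, size per cell) is the kind of geometry description that can then be imported into a particle-tracking toolkit and populated with sub-cellular regions.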
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
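The Monte-Carlo parameter-sampling idea behind such uncertainty analysis can be sketched with a toy model in place of OTIS; the exponential tracer model, parameter ranges, and acceptance threshold below are all illustrative assumptions, not the tool's actual configuration:

```python
import numpy as np

def tracer_model(t, A, k):
    """Toy stand-in for a transient-storage model: first-order tracer decay."""
    return A * np.exp(-k * t)

# Synthetic 'observed' solute-tracer data generated with known parameters.
t = np.linspace(0.0, 10.0, 50)
obs = tracer_model(t, A=2.0, k=0.5)

# Monte-Carlo parameter sampling: draw many candidate parameter sets from
# broad uniform priors and score each against the observations.
rng = np.random.default_rng(3)
n_draws = 5000
A_s = rng.uniform(0.5, 5.0, n_draws)
k_s = rng.uniform(0.05, 2.0, n_draws)
rmse = np.array([np.sqrt(np.mean((tracer_model(t, a, k) - obs) ** 2))
                 for a, k in zip(A_s, k_s)])

# The spread of the 'behavioral' (acceptably fitting) parameter sets is a
# direct measure of parameter certainty.
behavioral = rmse < 0.1
k_lo, k_hi = k_s[behavioral].min(), k_s[behavioral].max()
```

A narrow behavioral range means the parameter is well constrained by the tracer data; a wide range is exactly the kind of parameter uncertainty the abstract argues should be reported before interpreting stream-reach functioning.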
Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun
2016-09-01
Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. Incident carbon-ion beams with energies between 220 MeV/u and 290 MeV/u were chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE at 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread-Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of depth in the water phantom.
Monte Carlo simulation of domain growth in the kinetic Ising model on the connection machine
NASA Astrophysics Data System (ADS)
Amar, Jacques G.; Sullivan, Francis
1989-10-01
A fast multispin algorithm for the Monte Carlo simulation of the two-dimensional spin-exchange kinetic Ising model, previously described by Sullivan and Mountain and used by Amar et al., has been adapted for use on the Connection Machine and applied as a first test in a calculation of domain growth. Features of the code include: (a) the use of demon bits, (b) the simulation of several runs simultaneously to improve the efficiency of the code, (c) the use of virtual processors to simulate easily and efficiently a larger system size, (d) the use of the (NEWS) grid for fast communication between neighbouring processors and updating of boundary layers, (e) the implementation of an efficient random number generator much faster than that provided by Thinking Machines Corp., and (f) the use of the LISP function "funcall" to select which processors to update. Overall speed of the code when run on a (128×128) processor machine is about 130 million attempted spin-exchanges per second, about 9 times faster than the comparable code using hardware vectorised-logic operations and 64-bit multispin coding on the Cyber 205. The same code can be used on a larger machine (65 536 processors) and should produce speeds in excess of 500 million attempted spin-exchanges per second.
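The one-spin-per-processor layout can be imitated serially with a data-parallel checkerboard sweep, in which every site of one sublattice is updated simultaneously; this sketch uses single-spin-flip Metropolis moves for brevity rather than the spin-exchange (Kawasaki) moves of the kinetic Ising model:

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One data-parallel Metropolis sweep of the 2D Ising model.

    Each checkerboard sublattice is updated in a single vectorized step,
    mimicking simultaneous per-processor updates; periodic boundaries via
    np.roll play the role of the NEWS-grid neighbour communication.
    """
    L = spins.shape[0]
    ii, jj = np.indices((L, L))
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
               + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                      # cost of flipping each spin
        accept = rng.random((L, L)) < np.exp(-beta * np.clip(dE, 0.0, None))
        spins = np.where(mask & accept, -spins, spins)
    return spins

rng = np.random.default_rng(0)
state = rng.choice([-1, 1], size=(32, 32))
for _ in range(50):
    state = checkerboard_sweep(state, beta=0.6, rng=rng)
```

Updating one sublattice at a time is what keeps the simultaneous flips valid: no two sites updated in the same step are neighbours, so each flip sees a fixed environment.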
Modeling of composite latex particle morphology by off-lattice Monte Carlo simulation.
Duda, Yurko; Vázquez, Flavio
2005-02-01
Composite latex particles have shown a great range of applications such as paint resins, varnishes, waterborne adhesives, impact modifiers, etc. The high-performance properties of this kind of material may be explained in terms of a synergistic combination of two different polymers (usually a rubber and a thermoplastic). A great variety of composite latex particles with very different morphologies may be obtained by two-step emulsion polymerization processes. The formation of a specific particle morphology depends on the chemical and physical nature of the monomers used during the synthesis, the process temperature, the reaction initiator, the surfactants, etc. Only a few models have been proposed to explain the appearance of the composite particle morphologies. These models have been based on the change of the interfacial energies during the synthesis. In this work, we present a new three-component model: a polymer blend (flexible and rigid chain particles) is dispersed in water by forming spherical cavities. Monte Carlo simulations of the model in two dimensions are used to determine the density distribution of chains and water molecules inside the suspended particle. This approach allows us to study the dependence of the morphology of the composite latex particles on the relative hydrophilicity and flexibility of the chain molecules as well as on their density and composition. It has been shown that our simple model is capable of reproducing the main features of the various morphologies observed in synthesis experiments.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which has limited their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
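Particle navigation in such parameterized geometry reduces to finding the distance along the particle's direction to the nearest quadric bounding surface. A sketch, assuming each surface is represented as a symmetric 4×4 matrix Q in homogeneous coordinates (x^T Q x = 0), which covers spheres, cylinders, and planes:

```python
import numpy as np

def distance_to_quadric(p, d, Q):
    """Distance along unit direction d from point p to the quadric x^T Q x = 0.

    Q is a symmetric 4x4 matrix in homogeneous coordinates. Substituting the
    ray x = p + t*d gives a quadratic a*t^2 + b*t + c = 0 in t; the smallest
    positive root is the distance to the boundary. Returns inf on a miss.
    """
    ph, dh = np.append(p, 1.0), np.append(d, 0.0)
    a = dh @ Q @ dh
    b = 2.0 * (ph @ Q @ dh)        # valid because Q is symmetric
    c = ph @ Q @ ph
    if abs(a) < 1e-12:             # degenerate (plane-like) case: linear in t
        return -c / b if b != 0 and -c / b > 0 else np.inf
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return np.inf              # ray never intersects the surface
    roots = sorted([(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)])
    for t in roots:
        if t > 1e-9:
            return t
    return np.inf

# Unit sphere at the origin: x^2 + y^2 + z^2 - 1 = 0.
Q_sphere = np.diag([1.0, 1.0, 1.0, -1.0])
t_hit = distance_to_quadric(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), Q_sphere)
t_miss = distance_to_quadric(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, -1.0]), Q_sphere)
```

A transport loop would take the minimum of this distance over all bounding surfaces of the current region and compare it with the sampled distance to the next physics interaction.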
Antonelli, Maria-Rosaria; Pierangelo, Angelo; Novikova, Tatiana; Validire, Pierre; Benali, Abdelali; Gayet, Brice; De Martino, Antonello
2011-01-01
Polarimetric imaging is emerging as a viable technique for tumor detection and staging. As a preliminary step towards a thorough understanding of the observed contrasts, we present a set of numerical Monte Carlo simulations of the polarimetric response of multilayer structures representing colon samples in the backscattering geometry. In a first instance, a typical colon sample was modeled as one or two scattering “slabs” with monodisperse non absorbing scatterers representing the most superficial tissue layers (the mucosa and submucosa), above a totally depolarizing Lambertian lumping the contributions of the deeper layers (muscularis and pericolic tissue). The model parameters were the number of layers, their thicknesses and morphology, the sizes and concentrations of the scatterers, the optical index contrast between the scatterers and the surrounding medium, and the Lambertian albedo. With quite similar results for single and double layer structures, this model does not reproduce the experimentally observed stability of the relative magnitudes of the depolarizing powers for incident linear and circular polarizations. This issue was solved by considering bimodal populations including large and small scatterers in a single layer above the Lambertian, a result which shows the importance of taking into account the various types of scatterers (nuclei, collagen fibers and organelles) in the same model. PMID:21750762
Walker, Jeffrey A
2016-01-01
Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation r_F^2-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS distributions suggest that the GLS results in downward
NASA Astrophysics Data System (ADS)
Pusztai, László
1991-02-01
The use of Reverse Monte Carlo simulation, a novel method of structural modelling, looks very promising for the case of metallic glasses. In this paper initial results are shown for glassy Ni2B, using experimental radial distribution functions as input information.
NASA Astrophysics Data System (ADS)
Males, Richard M.; Melby, Jeffrey A.
2011-12-01
The US Army Corps of Engineers has a mission to conduct a wide array of programs in the arenas of water resources, including coastal protection. Coastal projects must be evaluated according to sound economic principles, and considerations of risk assessment and sea level change must be included in the analysis. Breakwaters are typically nearshore structures designed to reduce wave action in the lee of the structure, resulting in calmer waters within the protected area, with attendant benefits in terms of usability by navigation interests, shoreline protection, reduction of wave runup and onshore flooding, and protection of navigation channels from sedimentation and wave action. A common method of breakwater construction is the rubble mound breakwater, constructed in a trapezoidal cross section with gradually increasing stone sizes from the core out. Rubble mound breakwaters are subject to degradation from storms, particularly for antiquated designs with under-sized stones insufficient to protect against intense wave energy. Storm waves dislodge the stones, resulting in lowering of crest height and associated protective capability for wave reduction. This behavior happens over a long period of time, so a lifecycle model (that can analyze the damage progression over a period of years) is appropriate. Because storms are highly variable, a model that can support risk analysis is also needed. Economic impacts are determined by the nature of the wave climate in the protected area, and by the nature of the protected assets. Monte Carlo simulation (MCS) modeling that incorporates engineering and economic impacts is a worthwhile method for handling the many complexities involved in real world problems. The Corps has developed and utilized a number of MCS models to compare project alternatives in terms of their costs and benefits. This paper describes one such model, Coastal Structure simulation (CSsim) that has been developed specifically for planning level analysis of
Monte Carlo simulation of x-ray scatter based on patient model from digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Liu, Bob; Wu, Tao; Moore, Richard H.; Kopans, Daniel B.
2006-03-01
We are developing a breast-specific scatter correction method for digital breast tomosynthesis (DBT). The 3D breast volume was initially reconstructed from 15 projection images acquired from a GE prototype tomosynthesis system without correction of scatter. The voxel values were mapped to tissue compositions using various segmentation schemes. This voxelized digital breast model was entered into a Monte Carlo package simulating the prototype tomosynthesis system. One billion photons were generated from the x-ray source for each projection in the simulation, and images of scattered photons were obtained. A primary-only projection image was then produced by subtracting the scatter image from the corresponding original projection image, which contains contributions from both primary and scattered photons. The scatter-free projection images were then used to reconstruct the 3D breast volume using the same algorithm. Compared with the uncorrected 3D image, the x-ray attenuation coefficients represented by the scatter-corrected 3D image are closer to those derived from the measurement data.
Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics
NASA Astrophysics Data System (ADS)
Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou
Ebola virus infection is a severe infectious disease with the highest case fatality rate and has become a global public health threat. What makes the disease especially dangerous is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article, a new mathematical model incorporating both vaccination and quarantine is developed and comprehensively analyzed to study the dynamics of an Ebola epidemic. The existence and uniqueness of the solution to the model are verified, and the basic reproduction number is calculated. Stability conditions are also checked, and finally simulations are performed using both the Euler method and the Markov Chain Monte Carlo (MCMC) method, one of the most influential algorithms. Different rates of vaccination and quarantine are discussed to predict their effects on infected individuals over time. The results show that quarantine and vaccination are very effective ways to control an Ebola epidemic. Our study also indicates that an individual who survives a first infection is unlikely to contract Ebola virus a second time. Finally, real data have been fitted to the model, showing that it can be used to predict the dynamics of an Ebola epidemic.
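The Euler-method simulation of such a compartment model can be sketched as follows. This is a toy SIQR model with illustrative rates, not the paper's full model, and the MCMC fitting step is omitted; it reproduces the qualitative finding that vaccination and quarantine flatten the epidemic:

```python
import numpy as np

def simulate_epidemic(beta, gamma, nu, q, days=200, dt=0.1, N=1_000_000, I0=10):
    """Euler integration of a toy SIQR model with vaccination and quarantine.

    beta: transmission rate; gamma: recovery rate; nu: vaccination rate
    (moves susceptibles directly to removed); q: quarantine rate (moves
    infectives to an isolated class that no longer transmits). All rates
    are illustrative assumptions. Returns the peak number of infectives.
    """
    S, I, Q, R = N - I0, float(I0), 0.0, 0.0
    peak_I = I
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N
        dS = -new_inf - nu * S
        dI = new_inf - (gamma + q) * I
        dQ = q * I - gamma * Q
        dR = gamma * (I + Q) + nu * S
        S += dt * dS; I += dt * dI; Q += dt * dQ; R += dt * dR
        peak_I = max(peak_I, I)
    return peak_I

# No intervention vs. combined vaccination + quarantine.
peak_none = simulate_epidemic(beta=0.35, gamma=0.1, nu=0.0, q=0.0)
peak_ctrl = simulate_epidemic(beta=0.35, gamma=0.1, nu=0.01, q=0.1)
```

With the control rates switched on, the effective reproduction number drops and susceptibles are steadily removed by vaccination, so the infective peak is much lower than in the uncontrolled run.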
Monte Carlo simulation of flexible trimers: from square well chains to amphiphilic primitive models.
Jiménez-Serratos, Guadalupe; Gil-Villegas, Alejandro; Vega, Carlos; Blas, Felipe J
2013-09-21
In this work, we present Monte Carlo computer simulation results for a primitive model of a self-assembling system based on a flexible 3-mer chain interacting via square-well interactions. The effect of switching off the attractive interaction in an extreme sphere is analyzed, since the anisotropy in the molecular potential promotes self-organization. Before addressing studies of self-organization, it is necessary to know the vapor-liquid equilibrium of the system, to avoid confusing self-organization with phase separation. The range of the attractive potential of the model, λ, is kept constant and equal to 1.5σ, where σ is the diameter of a monomer sphere, while the attractive interaction in one of the monomers was gradually turned off until a pure hard-body interaction was obtained. We present the vapor-liquid coexistence curves for the different models studied, their critical properties, and a comparison with the SAFT-VR theory prediction [A. Gil-Villegas, A. Galindo, P. J. Whitehead, S. J. Mills, G. Jackson, and A. N. Burgess, J. Chem. Phys. 106, 4168 (1997)]. Evidence of self-assembly in this system is discussed.
Canopy polarized BRDF simulation based on non-stationary Monte Carlo 3-D vector RT modeling
NASA Astrophysics Data System (ADS)
Kallel, Abdelaziz; Gastellu-Etchegorry, Jean Philippe
2017-03-01
Vector radiative transfer (VRT) has been widely used to simulate the polarized reflectance of the atmosphere and ocean. However, it is still not commonly used to describe the polarized reflectance of vegetation cover. In this study, we propose a 3-D VRT model based on a modified Monte Carlo (MC) forward ray-tracing simulation to analyze vegetation canopy reflectance. Two kinds of leaf scattering are taken into account: (i) Lambertian diffuse reflectance and transmittance and (ii) specular reflection. A new method to estimate the condition on leaf orientation required to produce reflection is proposed, and its probability of occurrence, Pl,max, is computed. It is then shown that Pl,max is low, but when reflection happens, the corresponding radiance Stokes vector, Io, is very high. Such a phenomenon dramatically increases the MC variance and yields an irregular reflectance distribution function. For better regularization, we propose a non-stationary MC approach that simulates reflection for each sunny leaf, assuming that its orientation is randomly chosen according to its angular distribution. It is shown in this case that the average canopy reflection is proportional to Pl,max·Io, which produces a smooth distribution. Two experiments are conducted: (i) assuming leaf light polarization is due only to Fresnel reflection and (ii) the general polarization case. In the former experiment, our results confirm that in the forward direction the canopy polarizes light horizontally. In addition, they show that in inclined forward directions, diagonal polarization can be observed. In the latter experiment, polarization is produced in all orientations. It is particularly pointed out that specular polarization explains only part of the forward polarization. Diffuse scattering polarizes light horizontally and vertically in the forward and backward directions, respectively. A weak circular polarization signal is also observed near the backscattering direction. Finally, validation of the non
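The variance problem and its cure can be illustrated numerically: scoring the expected contribution p·Io for every sample, instead of waiting for the rare high-radiance reflection to be hit by chance, removes the Monte Carlo variance entirely in this toy case (the numbers below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
p, I_o = 1e-3, 5000.0     # rare specular event carrying a very large radiance
n = 20000                 # number of Monte Carlo samples (rays/leaves)

# Naive estimator: sample the rare event directly. Most samples score zero,
# a few score I_o, so the estimate of the mean p*I_o is very noisy.
hits = rng.random(n) < p
naive = np.where(hits, I_o, 0.0)

# Conditioned ('non-stationary') estimator: score the expected specular
# contribution p * I_o for every sampled leaf. Here the event probability is
# a known constant, so the estimator has zero variance.
conditioned = np.full(n, p * I_o)

naive_mean = naive.mean()
cond_mean = conditioned.mean()
```

Both estimators are unbiased for the same mean, but the conditioned one trades the rare, huge scores for a smooth per-sample expectation, which is exactly how averaging over each sunny leaf's orientation distribution smooths the reflectance distribution function.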
NASA Astrophysics Data System (ADS)
Gereben, Orsolya; Pusztai, László
2013-10-01
The liquid structure of tetrachloroethene has been investigated on the basis of measured neutron and X-ray scattering structure factors, applying molecular dynamics simulations and reverse Monte Carlo (RMC) modeling with flexible molecules and interatomic potentials. As no complete all-atom force field parameter set could be found for this planar molecule, the closest matching all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) intra-molecular parameter set was improved by equilibrium bond length and angle parameters coming from electron diffraction experiments [I. L. Karle and J. Karle, J. Chem. Phys. 20, 63 (1952)]. In addition, four different intra-molecular charge distribution sets were tried, so in total, eight different molecular dynamics simulations were performed. The best parameter set was selected by calculating the mean square difference between the calculated total structure factors and the corresponding experimental data. The best parameter set proved to be the one that uses the electron diffraction based intra-molecular parameters and the charges qC = 0.1 and qCl = -0.05. The structure was further successfully refined by applying RMC computer modeling with flexible molecules that were kept together by interatomic potentials. Correlation functions concerning the orientation of molecular axes and planes were also determined. They reveal that the molecules closest to each other exclusively prefer the parallel orientation of both the molecular axes and planes. Molecules forming the first maximum of the center-center distribution have a preference for <30° and >60° axis orientation and >60° molecular plane arrangement. A second coordination sphere at ˜11 Å and a very small third one at ˜16 Å can be found as well, without preference for any axis or plane orientation.
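The RMC refinement loop can be sketched in one dimension, fitting atom positions to a target pair-distance histogram. This is a greedy variant that accepts only improving moves (full RMC also accepts worsening moves with a probability of the form exp(-Δχ²/2), and fits measured structure factors rather than a toy histogram); the two-cluster target structure is an illustrative assumption:

```python
import numpy as np

def pair_hist(pos, bins):
    """Histogram of all pair distances: a 1D analogue of the radial distribution."""
    n = len(pos)
    d = np.abs(pos[:, None] - pos[None, :])[np.triu_indices(n, 1)]
    return np.histogram(d, bins=bins)[0].astype(float)

def rmc_refine(x, target, bins, box=20.0, steps=3000, seed=None):
    """Greedy reverse Monte Carlo: random single-atom moves are kept only if
    they reduce chi^2 against the target histogram."""
    rng = np.random.default_rng(seed)
    chi2 = ((pair_hist(x, bins) - target) ** 2).sum()
    chi2_start = chi2
    for _ in range(steps):
        i = rng.integers(len(x))
        old = x[i]
        x[i] = (old + rng.normal(0.0, 0.5)) % box   # trial move, periodic box
        new = ((pair_hist(x, bins) - target) ** 2).sum()
        if new <= chi2:
            chi2 = new          # improvement: keep the move
        else:
            x[i] = old          # otherwise revert it
    return chi2_start, chi2

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 20.0, 41)
# Target: pair-distance histogram of a two-cluster 'experimental' structure.
ref = np.concatenate([rng.normal(5.0, 0.5, 30), rng.normal(15.0, 0.5, 30)]) % 20.0
target = pair_hist(ref, bins)
x0 = rng.uniform(0.0, 20.0, 60)   # start from a random configuration
chi2_start, chi2_end = rmc_refine(x0, target, bins, seed=2)
```

Starting from random positions, the loop drives the configuration toward one whose pair-distance statistics match the target, which is the essence of refining a structural model against diffraction data.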
Microsommite: crystal chemistry, phase transitions, Ising model and Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bonaccorsi, E.; Merlino, S.; Pasero, M.; Macedonio, G.
Microsommite, ideal formula [Na4K2(SO4)] [Ca2Cl2][Si6Al6O24], is a rare feldspathoid that occurs in volcanic products of Vesuvius. It belongs to the cancrinite-davyne group of minerals, presenting an ABAB... stacking sequence of layers that contain six-membered rings of tetrahedra, with Si and Al cations regularly alternating in the tetrahedral sites. The structure was refined in space group P63 to R = 0.053 by means of single-crystal X-ray diffraction data. The cell parameters are a = 22.161 Å = √3 a_dav, c = 5.358 Å = c_dav, Z = 3. The superstructure arises from the long-range ordering of extra-framework ions within the channels of the structure. This ordering progressively decreases with rising temperature until it is completely lost and microsommite transforms into davyne. The order-disorder transformation was monitored in several crystals by means of X-ray superstructure reflections, and the critical parameters Tc ≈ 750 °C and β ≈ 0.12 were obtained. The kinetics of the ordering process were followed at different temperatures and the activation energy was determined to be about 125 kJ/mol. The continuous order-disorder phase transition in microsommite is discussed on the basis of a two-dimensional Ising model on a triangular lattice with nn (nearest-neighbour) and nnn (next-nearest-neighbour) interactions. This model was simulated using a Monte Carlo technique. The theoretical model matches the experimental data well; the simulated runs indicated two phase transitions: at low temperature only one of the three sublattices begins to disorder, whereas the second transition involves all three sublattices.
Modeling of multi-band drift in nanowires using a full band Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Hathwar, Raghuraj; Saraniti, Marco; Goodnick, Stephen M.
2016-07-01
We report on a new numerical approach for multi-band drift within the context of full band Monte Carlo (FBMC) simulation and apply it to Si and InAs nanowires. The approach is based on the solution of the Krieger and Iafrate (KI) equations [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986)], which give the probability of carriers undergoing interband transitions subject to an applied electric field. The KI equations follow from the time-dependent Schrödinger equation, and previous work solved them numerically with Runge-Kutta (RK) methods. That approach is computationally expensive and was therefore applied only to a small part of the Brillouin zone (BZ). Here we discuss an alternative solution of the KI equations using the Magnus expansion (also known as "exponential perturbation theory"). This method is more accurate than the RK method because the solution lies on the exponential map and shares important qualitative properties with the exact solution, such as preservation of the unitary character of the time evolution operator. The solution of the KI equations is then incorporated through a modified FBMC free-flight drift routine and applied throughout the nanowire BZ. The importance of the multi-band drift model is demonstrated for Si and InAs nanowires by performing uniform-field FBMC simulations and analyzing the average carrier energies and carrier populations under high electric fields. The simulations show that the average carrier energy under high electric field is significantly higher when multi-band drift is taken into consideration, because the interband transitions allow carriers to reach higher energies.
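The unitarity argument for the Magnus expansion can be seen in a toy two-level problem. This is a first-order Magnus sketch for an assumed time-dependent Hamiltonian, not the KI equations themselves; the point is that the Magnus propagator, being a matrix exponential of a (skew-)Hermitian generator, stays exactly unitary, while a lowest-order explicit scheme drifts off the unitary group:

```python
import numpy as np
from scipy.linalg import expm

# Two-level Hamiltonian with a time-dependent coupling (illustrative, hbar = 1)
def H(t):
    return np.array([[0.0, np.cos(t)], [np.cos(t), 0.0]], dtype=complex)

dt, steps = 0.01, 500
U_magnus = np.eye(2, dtype=complex)
U_euler = np.eye(2, dtype=complex)
for k in range(steps):
    t = k * dt
    # First-order Magnus: exponential of the (midpoint) integral of H over the step
    U_magnus = expm(-1j * H(t + dt / 2) * dt) @ U_magnus
    # Explicit Euler (lowest-order scheme): not on the exponential map
    U_euler = (np.eye(2) - 1j * H(t) * dt) @ U_euler

unitarity_magnus = np.linalg.norm(U_magnus.conj().T @ U_magnus - np.eye(2))
unitarity_euler = np.linalg.norm(U_euler.conj().T @ U_euler - np.eye(2))
print(unitarity_magnus)  # round-off level: the Magnus propagator stays unitary
print(unitarity_euler)   # orders of magnitude larger: Euler drifts off unitarity
```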
Quasi-ballistic light scattering - analytical models versus Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Turcu, Ioan; Kirillin, Mikhail
2009-08-01
Approximate analytical solutions for light scattering in a plane-parallel geometry, where each scattering event is described by a Henyey-Greenstein (HG) phase function, are presented and compared with Monte Carlo simulations. Analyzing each nth-order scattered flux, the resulting angular spreading is also well described by an HG phase function. However, the total scattered flux deviates from the HG-type dependence, revealing the limits of the approximations.
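The observation that the nth-order scattered flux is still approximately HG-distributed can be checked numerically: composing n independent HG deflections gives a mean deflection cosine of g^n. This sketch uses the standard HG inverse-CDF sampler with an illustrative asymmetry g = 0.8:

```python
import math, random

def sample_hg_cos(g, u):
    """Inverse-CDF sampling of the Henyey-Greenstein deflection cosine."""
    if abs(g) < 1e-8:
        return 2 * u - 1
    s = (1 - g * g) / (1 - g + 2 * g * u)
    return (1 + g * g - s * s) / (2 * g)

rng = random.Random(42)
g, n = 0.8, 200_000

# Single scattering: <cos theta> estimates the asymmetry parameter g
g1 = sum(sample_hg_cos(g, rng.random()) for _ in range(n)) / n

# Double scattering: compose two deflections with a uniform azimuth;
# <cos Theta_2> = g**2, i.e. the nth-order flux behaves like HG with g**n
tot = 0.0
for _ in range(n):
    c1, c2 = sample_hg_cos(g, rng.random()), sample_hg_cos(g, rng.random())
    s1, s2 = math.sqrt(1 - c1 * c1), math.sqrt(1 - c2 * c2)
    phi = 2 * math.pi * rng.random()
    tot += c1 * c2 + s1 * s2 * math.cos(phi)
g2 = tot / n

print(round(g1, 2), round(g2, 2))  # close to 0.8 and 0.64
```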
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The resulting risk assessment reflects the influence of both sorts of uncertainty and is suitable as an index value.
ZARE SAKHVIDI, Mohammad Javad; BARKHORDARI, Abolfazl; SALEHI, Maryam; BEHDAD, Shekoofeh; FALLAHZADEH, Hossein
2013-01-01
The applicability of two mathematical models for inhalation exposure prediction (the well-mixed room model and the near-field/far-field model) was validated against a standard sampling method for isoflurane in an operating room. Ninety-six air samples were collected from the near and far fields of the room and quantified by gas chromatography with flame ionization detection. Isoflurane concentrations were also predicted by the models, with Monte Carlo simulation used to incorporate parameter variability. The models gave somewhat more conservative results than the measurements, but there was no significant difference between the models' predictions and the direct measurements, nor between the predictions of the well-mixed room model and the near-field/far-field model. This suggests that the dispersion regime in the room was close to the well-mixed situation. Direct sampling showed that exposure in the same room for the same type of operation could vary by up to a factor of 17, which can be captured by Monte Carlo simulation. Mathematical models are thus a valuable option for predicting exposure in operating rooms, and incorporating parameter variability through Monte Carlo simulation can strengthen predictions in occupational hygiene decision making. PMID:23912206
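A minimal sketch of the well-mixed room model with Monte Carlo parameter variability; the steady-state relation C = G/Q is standard, but the distributions and rates below are hypothetical placeholders, not values from the study:

```python
import random, statistics

def well_mixed_conc(G, Q):
    """Steady-state well-mixed room model: C = G / Q
    (G: contaminant generation rate, mg/min; Q: ventilation rate, m3/min)."""
    return G / Q

rng = random.Random(1)
# Hypothetical uniform parameter distributions, for illustration only
samples = [well_mixed_conc(rng.uniform(50, 150), rng.uniform(20, 60))
           for _ in range(50_000)]
mean_c = statistics.mean(samples)
p95_c = statistics.quantiles(samples, n=20)[18]   # 95th percentile
print(round(mean_c, 2), round(p95_c, 2))  # mean and upper-tail exposure, mg/m3
```

Reporting an upper percentile alongside the mean is how Monte Carlo simulation expresses the parameter variability that a single point prediction hides.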
Cuplov, Vesna; Buvat, Iréne; Pain, Frédéric; Jan, Sébastien
2014-02-01
The Geant4 Application for Emission Tomography (GATE) is an advanced open-source software package dedicated to Monte Carlo (MC) simulations in medical imaging involving photon transport (positron emission tomography, single photon emission computed tomography, computed tomography) and in particle therapy. In this work, we extend GATE to support simulations of optical imaging, such as bioluminescence or fluorescence imaging, and validate it in simple geometries against the Monte Carlo for multilayered media (MCML) standard simulation tool for biomedical optics. A full simulation set-up for molecular optical imaging (bioluminescence and fluorescence) is implemented in GATE, and images of the light distribution emitted from a phantom demonstrate the relevance of using GATE for optical imaging simulations.
Biscay, F; Ghoufi, A; Goujon, F; Lachet, V; Malfreyt, P
2008-11-06
The anisotropic united atoms (AUA4) model has been used for linear and branched alkanes to predict the surface tension as a function of temperature by Monte Carlo simulation. Simulations are carried out for n-alkanes (n-C5, n-C6, n-C7, and n-C10) and for two branched C7 isomers (2,3-dimethylpentane and 2,4-dimethylpentane). Different operational expressions of the surface tension, using both the thermodynamic and the mechanical definitions, have been applied. The surface tensions simulated with the AUA4 model are found to be consistent between the two definitions and in good agreement with experiments.
NASA Astrophysics Data System (ADS)
Wang, Yinglong; Qin, Aili; Chu, Lizhi; Deng, Zechao; Ding, Xuecheng; Guan, Li
2017-02-01
We simulated the nucleation and growth of Si nanoparticles produced by pulsed laser deposition using the Monte Carlo method at the molecular (microscopic) level. The model describes the mechanism and thermodynamic conditions of the nucleation and growth of Si nanoparticles. Using a target-substrate configuration at real physical scale, the model was applied to analyze the average size distribution of Si nanoparticles in argon ambient gas, and the calculated results are in agreement with the experimental results.
Baek, I-H; Lee, B-Y; Kang, J; Kwon, K-I
2015-04-01
Ondansetron is a potent antiemetic drug that has been commonly used to treat acute and chemotherapy-induced nausea and vomiting (CINV) in dogs. The aim of this study was to perform a pharmacokinetic analysis of ondansetron in dogs following oral administration of a single dose. A single 8-mg oral dose of ondansetron (Zofran®) was administered to beagles (n = 18), and the plasma concentrations of ondansetron were measured by liquid chromatography-tandem mass spectrometry. The data were analyzed by modeling approaches using ADAPT5, and model discrimination was determined by the likelihood-ratio test. The peak plasma concentration (Cmax) was 11.5 ± 10.0 ng/mL at 1.1 ± 0.8 h. The area under the plasma concentration vs. time curve from time zero to the last measurable concentration was 15.9 ± 14.7 ng·h/mL, and the half-life calculated from the terminal phase was 1.3 ± 0.7 h. The interindividual variability of the pharmacokinetic parameters was high (coefficient of variation > 44.1%), and the one-compartment model described the pharmacokinetics of ondansetron well. The estimated plasma concentration range of the usual empirical dose from the Monte Carlo simulation was 0.1-13.2 ng/mL. These findings will facilitate determination of the optimal dose regimen for dogs with CINV.
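The one-compartment model with first-order absorption that the authors found adequate has the closed form C(t) = F·D·ka/(V·(ka − ke))·(e^(−ke·t) − e^(−ka·t)). A sketch with hypothetical parameters chosen only to be loosely consistent with the reported Tmax and half-life (they are not the fitted values):

```python
import math

def conc_oral_1cpt(t, dose, F, V, ka, ke):
    """Plasma concentration for a one-compartment model with first-order
    absorption (Bateman equation); parameters are illustrative, not fitted."""
    return F * dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Hypothetical values: dose in ug, bioavailability F, volume V in L,
# absorption rate ka in 1/h, elimination rate ke from a 1.3 h half-life
dose_ug, F, V_L, ka, ke = 8000.0, 0.1, 60.0, 2.5, math.log(2) / 1.3

tmax = math.log(ka / ke) / (ka - ke)   # time of peak concentration (h)
cmax = conc_oral_1cpt(tmax, dose_ug, F, V_L, ka, ke)  # ug/L = ng/mL
print(round(tmax, 2), round(cmax, 1))
```

Sampling ka, ke, and V from interindividual distributions and evaluating this curve repeatedly is the Monte Carlo step that yields a predicted concentration range like the one reported.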
Zhu, Caigang; Liu, Quan
2012-01-01
We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in the mass attenuation coefficients of bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of the other elements. To test this model with the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from the formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo dose calculations performed with this model and with the single-bone model were compared, demonstrating that at kilovoltage energies the discrepancy could be more than 100% in the dose to bone and 30% in the dose to soft tissue. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams, as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult humans.
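The core of the proposed bone model, fitting the calcium (and, analogously, phosphorus) mass fraction as a polynomial function of density, can be sketched as follows. The density-composition pairs are hypothetical stand-ins for the literature values the authors fit; the procedure, not the numbers, is the point:

```python
import numpy as np

# Hypothetical (density g/cm3, calcium mass fraction) reference pairs
rho = np.array([1.18, 1.33, 1.46, 1.61, 1.75, 1.92])
w_ca = np.array([0.09, 0.13, 0.17, 0.20, 0.23, 0.27])

# Calcium fraction as a polynomial in density, as in the proposed formulation
coeffs = np.polyfit(rho, w_ca, 2)
w_ca_model = np.polyval(coeffs, rho)
residual = float(np.max(np.abs(w_ca_model - w_ca)))
print(residual)   # maximum fit residual over the reference bones

# The fitted formula then assigns a composition to any voxel density, e.g.:
print(round(float(np.polyval(coeffs, 1.5)), 3))
```

In a Monte Carlo geometry, each density bin would get its own composition from this formula instead of a single generic bone material.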
NASA Astrophysics Data System (ADS)
Nourazar, S. S.; Jahangiri, P.; Aboutalebi, A.; Ganjaei, A. A.; Nourazar, M.; Khadem, J.
2011-06-01
The effect of the new terms in the improved algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is investigated by simulating a rarefied binary gas mixture flow inside a rotating cylinder. Dalton's law for the partial pressures contributed by each species of the binary gas mixture is incorporated into our simulations using both the MDSMC method and the direct simulation Monte Carlo (DSMC) method. Moreover, the effect of the exponent of the cosine of the deflection angle (α) in the intermolecular collision models, the variable soft sphere (VSS) and the variable hard sphere (VHS), is investigated. The results of the MDSMC method show a pronounced improvement over those of the DSMC method. The VSS model also improves the simulated mixture temperature, compared with the VHS model, at radial distances close to the cylinder wall, where the temperature reaches its maximum value.
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2016-04-01
We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to a total size of N = 20633239 sites, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use a generalized single-GPU implementation of the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we confirm the duality relation between the critical temperatures of the dual pair of quasilattices to a high degree of accuracy: sinh(2J/Tc) sinh(2J/Tc*) = 1.00000 ± 0.00004.
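The quoted duality relation can be verified directly from the two reported critical temperatures:

```python
import math

# Reported critical temperatures (in units of J) for the dual pair of lattices
Tc, Tc_dual = 2.39781, 2.14987

# Kramers-Wannier-type duality: sinh(2J/Tc) * sinh(2J/Tc*) = 1
product = math.sinh(2 / Tc) * math.sinh(2 / Tc_dual)
print(round(product, 5))  # equals 1 within the quoted uncertainty
```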
Assessment of mean annual flood damage using simple hydraulic modeling and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Oubennaceur, K.; Agili, H.; Chokmani, K.; Poulin, J.; Marceau, P.
2016-12-01
Floods are the most frequent and the most damaging natural disaster in Canada, and assessing and managing the associated risk has become increasingly crucial for both local and national authorities. Brigham, a municipality in southern Quebec, is one of the regions most heavily affected by this disaster, with the Yamaska River overflowing two to three times per year. Since Hurricane Irene struck the region in 2011, causing considerable socio-economic damage, implementing mitigation measures has become a major priority for the municipality, and a preliminary study evaluating the risk to which the region is exposed is essential. Conventionally, approaches based only on characterizing the hazard (e.g., floodplain extent, flood depth) are adopted to study flood risk. To improve the knowledge of this risk, a Monte Carlo simulation approach combining hazard information with vulnerability-related aspects has been developed. The approach integrates three main components: (1) hydrologic modeling, to establish a probability-discharge function that associates each measured discharge with its probability of occurrence; (2) hydraulic modeling, to establish the relationship between discharge and water stage at each building; and (3) a damage study, to assess building damage using damage functions. Damage is estimated from the water depth, defined as the difference between the water level and the elevation of the building's first floor. Applying the proposed approach yields an estimate of the mean annual cost of flood damage to buildings. The results will help authorities support their decisions on risk management and prevention against this disaster.
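The three-component chain (discharge probability → water stage → damage) can be sketched as a per-building Monte Carlo loop. The rating curve, depth-damage function, and discharge distribution below are hypothetical placeholders, not the study's calibrated models:

```python
import random

def stage_from_discharge(q):
    """Hypothetical rating curve: water level (m) as a function of discharge (m3/s)."""
    return 0.01 * q ** 0.7

def damage_fraction(depth_m):
    """Hypothetical depth-damage function: fraction of building value lost."""
    return min(1.0, max(0.0, 0.25 * depth_m))

rng = random.Random(7)
first_floor_elev, building_value = 1.2, 200_000.0   # m, $ (illustrative)
n = 100_000
total = 0.0
for _ in range(n):
    q = rng.expovariate(1 / 400.0)                  # sampled annual discharge
    depth = stage_from_discharge(q) - first_floor_elev
    total += damage_fraction(depth) * building_value
mean_annual_damage = total / n
print(round(mean_annual_damage))   # expected annual damage for one building
```

Summing this quantity over all buildings gives the municipality-level mean annual flood damage that supports the risk-management decisions.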
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
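The idea of separating input (parameter) uncertainty from patient-level Monte Carlo noise can be sketched with a toy model. The correction below subtracts a known within-run variance; it stands in for the paper's analysis-of-variance estimators, which estimate that component from the data:

```python
import random, statistics

def patient_level_model(eff, n_patients, rng):
    """Toy patient-level run: mean outcome over simulated patients for one
    sampled value 'eff' of an uncertain model input (illustrative only)."""
    return statistics.mean(rng.gauss(eff, 5.0) for _ in range(n_patients))

rng = random.Random(3)
K, n = 200, 100                     # K input samples, n patients per run
run_means = [patient_level_model(rng.gauss(10.0, 2.0), n, rng) for _ in range(K)]

# The raw variance of run means mixes genuine input uncertainty with
# patient-level sampling noise; subtract the known within-run component.
raw_var = statistics.variance(run_means)
within = 5.0 ** 2 / n               # patient-level variance / patients per run
psa_var = raw_var - within
print(round(raw_var, 2), round(psa_var, 2))  # corrected value is near 4 (= 2**2)
```

Because the within-run component is removed analytically, far fewer patients per run are needed to recover the input-uncertainty variance, which is the computational saving the paper formalizes.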
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking
Wako, H
1989-12-01
Monte Carlo simulations of a small protein, crambin, were carried out with and without hydration energy. The methodology presented here differs from other similar simulations of proteins in solution in two respects: (1) protein conformations are treated with fixed geometry, so that dihedral angles, rather than the Cartesian coordinates of atoms, are the independent variables; and (2) instead of treating water molecules explicitly in the calculation, hydration energy is incorporated into the conformational energy function in the form Σ g_i A_i, where A_i is the accessible surface area of atomic group i in a given conformation and g_i is the free energy of hydration per unit surface area of that atomic group (the hydration-shell model). The validity of this model was tested by carrying out Monte Carlo simulations from two kinds of starting conformations, native and unfolded, and in two kinds of systems, in vacuo and in solution. In the simulations starting from the native conformation, the differences between the mean properties in vacuo and in solution are not very large, but the fluctuations around the mean conformation during the simulation are somewhat smaller in solution than in vacuo. In the simulations starting from the unfolded conformation, by contrast, the molecule fluctuates much more in solution than in vacuo, and the effects of including the hydration energy are very pronounced. The results suggest that the method presented in this paper is useful for simulations of proteins in solution.
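The hydration-shell energy term Σ g_i A_i is simple to evaluate once accessible surface areas are available. The per-area free energies below are placeholders, not the parameters used in the paper:

```python
# Hydration-shell model: hydration free energy is the sum over atomic groups
# of (free energy per unit area) x (solvent-accessible surface area).
# Placeholder coefficients, kcal/(mol*A^2):
G_PER_AREA = {"C": 0.012, "N": -0.045, "O": -0.038, "S": -0.021}

def hydration_energy(groups):
    """groups: list of (atom_type, accessible_area_A2) for one conformation."""
    return sum(G_PER_AREA[atom] * area for atom, area in groups)

# A tiny hypothetical conformation: atom types with their exposed areas
conformation = [("C", 30.0), ("C", 12.5), ("O", 18.0), ("N", 9.0)]
print(round(hydration_energy(conformation), 3))  # -> -0.579
```

In the Monte Carlo loop, this term is simply added to the conformational energy after each dihedral-angle move, once the new accessible areas are computed.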
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions, and to comply with other criteria that cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a reduced-scale model pump based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump with the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa·m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the measured pumping speeds. In addition, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also to supply the input parameters necessary for future direct simulation Monte Carlo in the full flow regime.
Huang, Chen-Hsi; Marian, Jaime
2016-10-26
We derive an Ising Hamiltonian for kinetic simulations involving interstitial and vacancy defects in binary alloys. Our model, which we term 'ABVI', incorporates solute transport by both interstitial defects and vacancies into a mathematically-consistent framework, and thus represents a generalization to the widely-used ABV model for alloy evolution simulations. The Hamiltonian captures the three possible interstitial configurations in a binary alloy: A-A, A-B, and B-B, which makes it particularly useful for irradiation damage simulations. All the constants of the Hamiltonian are expressed in terms of bond energies that can be computed using first-principles calculations. We implement our ABVI model in kinetic Monte Carlo simulations and perform a verification exercise by comparing our results to published irradiation damage simulations in simple binary systems with Frenkel pair defect production and several microstructural scenarios, with matching agreement found.
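A kinetic Monte Carlo engine of the kind the ABVI Hamiltonian feeds is built around the residence-time (BKL) step: choose an event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. This is a generic sketch with illustrative barrier energies, not the ABVI implementation itself:

```python
import math, random

def kmc_step(rates, rng):
    """One residence-time (BKL) kinetic Monte Carlo step: pick event i with
    probability rates[i]/sum(rates), then draw the waiting time."""
    total = sum(rates)
    r, acc = rng.random() * total, 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

# Rates for, e.g., vacancy/interstitial jumps, from barriers built out of the
# Hamiltonian's bond energies: k = nu * exp(-E_barrier / kT) (illustrative)
nu, kT = 1e13, 0.05
barriers = [0.60, 0.65, 0.70]
rates = [nu * math.exp(-eb / kT) for eb in barriers]

rng = random.Random(0)
event, dt = kmc_step(rates, rng)
print(event, dt >= 0)
```

In a full ABVI simulation each lattice defect contributes a list of such jump rates, recomputed from the local bond energies after every accepted event.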
Dodds, Michael G; Vicini, Paolo
2004-09-01
Advances in computer hardware and the associated computer-intensive algorithms made feasible by these advances [like Markov chain Monte Carlo (MCMC) data analysis techniques] have made possible the application of hierarchical full Bayesian methods in analyzing pharmacokinetic and pharmacodynamic (PK-PD) data sets that are multivariate in nature. Pharmacokinetic data analysis in particular has seized upon this technology to refine estimates of drug parameters from sparse data gathered in a large, highly variable population of patients. A drawback of this type of analysis is that it is difficult to quantitatively assess convergence of the Markov chains to a target distribution, and thus it is sometimes difficult to assess the reliability of the resulting estimates. Another complicating factor is that, although the application of MCMC methods to population PK-PD problems has been facilitated by new software designed for the PK-PD domain (specifically PKBUGS), experts in PK-PD may not have the necessary experience with MCMC methods to detect and understand problems with model convergence. The objective of this work is to provide an example of a set of diagnostics useful to investigators, by analyzing in detail three convergence criteria (namely, the Raftery-Lewis, Geweke, and Heidelberger-Welch methods) on a simulated problem, using a rule of thumb of 10,000 elements in the Markov chain. We used two publicly available software packages to assess convergence of MCMC parameter estimates; the first performs Bayesian parameter estimation (PKBUGS/WinBUGS), and the second is focused on posterior analysis of estimates (BOA). The main message that seems to emerge is that accurately estimating confidence regions for the parameters of interest is more demanding than estimating the parameter means. Together, these tools provide numerical means by which an investigator can establish confidence in convergence and thus in the
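Of the three criteria, the Geweke diagnostic is the simplest to sketch: compare the mean of an early chain segment with the mean of a late segment via a z-score. The version below uses naive iid variances in place of the spectral-density estimates a full implementation (e.g. in BOA) would use:

```python
import random, statistics

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke diagnostic: z-score comparing the mean of the first 10% of the
    chain with the mean of the last 50%.  Simplified: iid variances stand in
    for the spectral-density estimates of a real implementation."""
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    se2 = statistics.variance(a) / len(a) + statistics.variance(b) / len(b)
    return (statistics.mean(a) - statistics.mean(b)) / se2 ** 0.5

rng = random.Random(5)
stationary = [rng.gauss(0.0, 1.0) for _ in range(10_000)]        # converged chain
drifting = [rng.gauss(i / 2_000.0, 1.0) for i in range(10_000)]  # still trending
print(round(geweke_z(stationary), 2), round(geweke_z(drifting), 2))
# |z| < 2 is consistent with convergence; the drifting chain fails decisively
```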
Schaefer, C.; Jansen, A. P. J.
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
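The mass-balance correction can be sketched as the smallest least-squares adjustment that restores exact conservation, expressed through the stoichiometry matrix S. This illustrates the idea; the paper's actual reduction to a non-singular linear system for general surface reactions is more involved:

```python
import numpy as np

def mass_balance_correct(flux, S):
    """Smallest (least-squares) correction restoring exact mass balance:
    minimize ||f' - f|| subject to S f' = 0, which gives
    f' = f - S^T (S S^T)^{-1} S f."""
    St = S.T
    correction = St @ np.linalg.solve(S @ St, S @ flux)
    return flux - correction

# Stoichiometry for a unimolecular reaction A -> B: the net production of
# A plus B must vanish (one conserved combination)
S = np.array([[1.0, 1.0]])
noisy_flux = np.array([1.02, -0.97])   # stochastic kMC estimates, not balanced
balanced = mass_balance_correct(noisy_flux, S)
print(balanced, S @ balanced)          # corrected fluxes satisfy S f = 0 exactly
```

Applying this projection to the kMC-estimated fluxes before inserting them into the macroscopic transport equations prevents the unphysical accumulation of mass in the reactor.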
NASA Astrophysics Data System (ADS)
Schaefer, C.; Jansen, A. P. J.
2013-02-01
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
D. L. Kelly
2007-06-01
Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
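As a hedged illustration of the MCMC approach (not the paper's WinBUGS models), the sketch below samples the posterior of a constant failure rate λ from Poisson failure data with a conjugate Gamma prior, using a random-walk Metropolis chain. The failure count, exposure time, and prior values are invented for the example; the conjugate form makes the answer checkable analytically.

```python
import math
import random

def log_post(lam, n, t, a, b):
    # Unnormalized log posterior: Poisson(n | lam * t) likelihood
    # times a Gamma(a, b) prior on the failure rate lam.
    if lam <= 0:
        return float("-inf")
    return (n + a - 1) * math.log(lam) - (t + b) * lam

def metropolis(n, t, a, b, steps=20000, step_sd=0.002, seed=1):
    # Random-walk Metropolis: propose lam' ~ N(lam, step_sd),
    # accept with probability min(1, post(lam') / post(lam)).
    rng = random.Random(seed)
    lam, samples = 0.005, []
    for _ in range(steps):
        prop = lam + rng.gauss(0.0, step_sd)
        if math.log(rng.random()) < log_post(prop, n, t, a, b) - log_post(lam, n, t, a, b):
            lam = prop
        samples.append(lam)
    return samples[steps // 2:]   # discard first half as burn-in

# 4 failures in 1000 h with a Gamma(1, 100) prior; the conjugate
# posterior is Gamma(5, 1100), mean 5/1100.
samples = metropolis(n=4, t=1000.0, a=1.0, b=100.0)
post_mean = sum(samples) / len(samples)
```

The posterior-predictive model checking discussed in the abstract would then draw simulated failure counts from each sampled λ and compare them to the observed data.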
Buyukada, Musa
2016-09-01
Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization, and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was reached by the ANN61 multi-layer perceptron model with an R(2) of 0.99994. A blend ratio of 90 to 10 (PH to coal, wt%), temperature of 305°C, and heating rate of 49°C min(-1) were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. The validation experiments resulted in yields of 87.5%±0.2 after three replications. Monte Carlo simulations were used for the probabilistic assessment of stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.
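The Monte Carlo uncertainty step can be sketched generically: propagate random perturbations of the inputs through a response model and summarize the output distribution. The quadratic surrogate below is a hypothetical stand-in for the trained ANN (only its optimum matches the reported values), and the input spreads are assumptions.

```python
import random
import statistics

def surrogate_yield(blend, temp, rate):
    # Hypothetical surrogate for the trained ANN: a quadratic peak
    # at the reported optimum (blend 90 wt%, 305 C, 49 C/min).
    return (87.4
            - 0.01 * (blend - 90.0) ** 2
            - 0.001 * (temp - 305.0) ** 2
            - 0.02 * (rate - 49.0) ** 2)

rng = random.Random(7)
# Assumed variability around the optimum operating point.
yields = [
    surrogate_yield(rng.gauss(90.0, 2.0),
                    rng.gauss(305.0, 10.0),
                    rng.gauss(49.0, 3.0))
    for _ in range(10000)
]
mean_y = statistics.mean(yields)
p5 = sorted(yields)[500]   # ~5th percentile of predicted yield
```

Because the surrogate is concave at the optimum, input variability can only lower the expected yield below the deterministic optimum, which is what the percentile summary quantifies.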
McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan
2013-01-01
Purpose: The purpose of this study is to adapt an equivalent source model originally developed for conventional CT Monte Carlo dose quantification to the radiation oncology context and validate its application for evaluating concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator. Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors have adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes into account intrinsic filtration and an equivalent filter module characterizing the added bowtie filtration. An equivalent spectrum was generated for an 80, 100, and 125 kVp beam with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full- and half-bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT mode of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within a body (32 cm in diameter) and head (16 cm in diameter) CTDI phantom. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly
[A study of brain inner tissue water molecule self-diffusion model based on Monte Carlo simulation].
Wu, Zhanxiong; Zhu, Shanan; Bin, He
2010-06-01
The study of the water molecule self-diffusion process is important not only for obtaining anatomical information about brain tissue, but also for shedding light on the diffusion of some medicines in brain tissue. In this paper, we summarized the self-diffusion model of water molecules in brain tissue and calculated the self-diffusion coefficient based on Monte Carlo simulation under different conditions. Comparison of this result with that of the Latour model showed that the two self-diffusion coefficients converge as the diffusion time increases, and that the Latour model is a long-time-dependent self-diffusion model.
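The core of such a simulation is estimating D from the mean squared displacement of random walkers, D = MSD/(2dt) in d dimensions. A minimal free-walk sketch (no membranes or obstacles, so the restricted-diffusion effects studied in the paper are absent; all parameters are illustrative):

```python
import random

def mean_squared_displacement(n_walkers=4000, n_steps=400, step=1.0, seed=3):
    # Free 1-D random walk with unit time per step; averages the
    # squared end-to-end displacement over many walkers.
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += step if rng.random() < 0.5 else -step
        msd += x * x
    return msd / n_walkers

t = 400                                   # total diffusion time
D = mean_squared_displacement() / (2 * t)  # expect step^2 / 2 = 0.5 when free
```

Adding reflecting barriers to the walk makes the apparent D fall below the free value at short times and approach a tortuosity-limited value at long times, which is the time dependence the Latour model describes.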
NASA Astrophysics Data System (ADS)
Fernández-Varea, José M.
1998-09-01
The algorithms implemented in the Monte Carlo codes LEEPS and PENELOPE for the simulation of the inelastic scattering of electrons and positrons are described. Both algorithms are based on the first Born approximation, in which the inelastic cross section is proportional to the generalized oscillator strength. This quantity is obtained by extrapolating the optical oscillator strength into the non-zero momentum transfer region using suitable extension algorithms. The calculated inelastic mean free paths and stopping powers are compared to other theoretical and experimental data available from the literature. The stability of PENELOPE's mixed simulation procedure under variations of the cutoff energy, which separates hard from soft collisions, is also analyzed. Finally, angular deflections of the projectile in inelastic collisions are considered.
Sprandel, Kelly A; Drusano, George L; Hecht, David W; Rotschafer, John C; Danziger, Larry H; Rodvold, Keith A
2006-08-01
Population pharmacokinetic modeling and Monte Carlo simulation (MCS) are approaches used to determine the probability of target attainment (PTA) of antimicrobial therapy. The objectives of this study were 1) to determine a population pharmacokinetic model (PPM) using metronidazole and hydroxy-metronidazole concentrations from healthy subjects and critically ill patients, and 2) to determine the probability of attaining the pharmacodynamic target area under the plasma concentration curve (AUC)/MIC ratio >or=70 against 218 clinical isolates of Bacteroides fragilis using MCS. Eighteen healthy subjects were randomized to 3 dosages of intravenous metronidazole (500 mg every 8 h, 1000 mg day(-1), 1500 mg day(-1)) in an open-label 3-way crossover fashion. Serial blood samples were collected over 25.5 h on the 3rd day of each study period. An additional 8 critically ill patients received intravenous metronidazole 500 mg every 8 h. Serial blood samples were collected over 8 h after the 2nd day of dosing. Plasma metronidazole and hydroxy-metronidazole concentrations were analyzed using a high-performance liquid chromatographic assay. The 834 plasma concentrations from 62 data sets were simultaneously modeled with the Non-Parametric Adaptive Grid population modeling program. A 4-compartment model with a metabolite and zero-order infusion into the central compartment was used. The mean parameter vector and covariance matrix from the PPM were inserted into the simulation module of ADAPT II. A 10,000-subject MCS was performed to determine the PTA for a total drug AUC to MIC ratio >or=70 against 218 isolates of B. fragilis (MIC range, 0.125-2.0 mg L(-1)). Mean parameter values were CL(non-OH), 3.08 L h(-1); Vc, 35.4 L; K(OH), 0.04 h(-1); CL(OH), 2.78 L h(-1); and V(OH), 9.66 L. The regression values of the observed versus predicted concentrations (r2) of metronidazole and hydroxy-metronidazole were 0.972 and 0.980, respectively. The PTA for metronidazole 1500 mg day(-1) or 500 mg
Search of New Higgs Boson in B-L Model at the LHC Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Mansour, H. M. M.; Bakhet, Nady
The aim of this work is to search for a new heavy Higgs boson in the B-L extension of the Standard Model at the LHC, using data produced from simulated collisions between two protons at different center-of-mass energies by Monte Carlo event generator programs to find new Higgs boson signatures at the LHC. We also study the production and decay channels of the Higgs boson in this model and its interactions with the other new particles of the model, namely the new massive neutral gauge boson and the new fermionic right-handed heavy neutrinos.
NASA Astrophysics Data System (ADS)
Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei
2014-09-01
Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, present lattice models for kinetic Monte Carlo simulations cannot handle the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in a SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor in maintaining a stable single 4H polytype during SiC growth.
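The event-selection core of any kinetic Monte Carlo simulation (here and in the surface-reaction papers above) is the rejection-free, Gillespie-style loop: pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time. A minimal two-event sketch, with illustrative rates standing in for deposition and diffusion:

```python
import math
import random

def kmc_counts(rates, t_end, seed=11):
    # Rejection-free (n-fold way / Gillespie) kinetic Monte Carlo:
    # each iteration selects one event i with probability rates[i]/total
    # and advances time by an Exp(total) waiting time.
    rng = random.Random(seed)
    total = sum(rates)
    t, counts = 0.0, [0] * len(rates)
    while t < t_end:
        t += -math.log(1.0 - rng.random()) / total
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):
            acc += r
            if u < acc:
                counts[i] += 1
                break
    return counts

# Two competing surface events, e.g. deposition vs. a diffusion hop,
# with an (illustrative) 1:3 rate ratio.
counts = kmc_counts([1.0, 3.0], t_end=5000.0)
```

Over long runs the event frequencies reproduce the rate ratio, which is the property that lets a lattice KMC model compare the growth kinetics of competing polytypes.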
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results have been added. The framework is presented and discussed in this paper and example workflows for body counter calibration and calculation of dose conversion coefficients is given to illustrate its application.
Models for direct Monte Carlo simulation of coupled vibration-dissociation
NASA Technical Reports Server (NTRS)
Haas, Brian L.; Boyd, Iain D.
1993-01-01
A new model for reactive collisions is developed within the framework of a particle method, which simulates coupled vibration-dissociation (CVD) behavior in high-temperature gases. The fundamental principles of particle simulation methods are introduced with particular attention given to the probability functions employed to select thermal and reactive collisions. Reaction probability functions are derived which favor vibrationally excited molecules as reaction candidates. The new models derived here are used to simulate CVD behavior during thermochemical relaxation of constant-volume O2 reservoirs, as well as the dissociation incubation behavior of postshock N2 flows for comparisons with previous models and experimental data.
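Particle methods of this kind select collision (and reaction) candidates by acceptance-rejection: a candidate pair is accepted with probability proportional to its collision rate, bounded by a known maximum. A hedged sketch of that selection step, with pair probability taken proportional to relative speed g and an illustrative speed list (reaction probabilities favoring vibrationally excited candidates follow the same accept/reject pattern with a different weighting function):

```python
import random

def select_collisions(speeds, n_trials, g_max, seed=5):
    # Acceptance-rejection: a randomly drawn candidate pair with
    # relative speed g is accepted with probability g / g_max,
    # biasing the accepted sample toward fast (energetic) pairs.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_trials):
        g = rng.choice(speeds)
        if rng.random() < g / g_max:
            accepted.append(g)
    return accepted

speeds = [0.5 + 0.01 * i for i in range(200)]   # relative speeds 0.5..2.49
acc = select_collisions(speeds, n_trials=20000, g_max=2.5)
mean_all = sum(speeds) / len(speeds)
mean_acc = sum(acc) / len(acc)
```

The accepted pairs have a higher mean relative speed than the population, which is exactly the bias a reaction-probability function exploits to favor excited molecules as reaction candidates.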
Quantum Monte Carlo simulations of the one-dimensional extended Hubbard model
Somsky, W.R.; Gubernatis, J.E.
1989-01-01
We report preliminary results of an investigation of the thermodynamic properties of the extended Hubbard model in one dimension, calculated with the world-line Monte Carlo method described by Hirsch et al. With strictly continuous world-lines, we are able to measure the expectation of operators that conserve fermion number locally, such as the energy and (spatial) occupation number. By permitting the world-lines to be "broken" stochastically, we may also measure the expectation of operators that conserve fermion number only globally, such as the single-particle Green's function. For a 32-site lattice we present preliminary calculations of the average electron occupancy as a function of wavenumber when U = 4, V = 0 and β = 16. For a half-filled band we find no indications of a Fermi surface. Slightly away from half-filling, we find Fermi-surface-like behavior similar to that found in other numerical investigations.
NASA Astrophysics Data System (ADS)
Almarza, N. G.; Pękalski, J.; Ciach, A.
2014-04-01
The triangular lattice model with nearest-neighbor attraction and third-neighbor repulsion, introduced by Pȩkalski, Ciach, and Almarza [J. Chem. Phys. 140, 114701 (2014)] is studied by Monte Carlo simulation. Introduction of appropriate order parameters allowed us to construct a phase diagram, where different phases with patterns made of clusters, bubbles or stripes are thermodynamically stable. We observe, in particular, two distinct lamellar phases—the less ordered one with global orientational order and the more ordered one with both orientational and translational order. Our results concern spontaneous pattern formation on solid surfaces, fluid interfaces or membranes that is driven by competing interactions between adsorbing particles or molecules.
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With the number of cells over a million, tallies over a hundred million, and particle histories over ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the memory required by the model exceeds the limit of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully fulfilled the simulation of the full-core pin-by-pin model by domain decomposition and nested parallel computation. The k_eff and flux of each cell are obtained.
Kusy, Kevin; Ford, Roseanne M
2007-09-15
Motile bacteria accumulated at the interface between an aqueous solution and a polymer gel suspension. The gel suspension was produced using Gelrite and contained 50-500 microm semisolid gel particulates in aqueous buffer. Smooth-swimming (HCB437) and wild-type (HCB1) Escherichia coli displayed normal swimming behaviors in the aqueous buffer but exhibited no translational motion when obstructed by the semisolid particulates of the gel suspension. Translational motion immediately resumed after the bacteria reoriented in a direction away from the particle surfaces. These observations were incorporated into Monte Carlo simulations that linked individual swimming properties to macroscopic bacterial distributions. The simulations suggested that the apparent surface area of the porous media influenced the degree of bacteria/surface interactions and that the mechanism of surface association could concentrate bacterial populations based upon the physical constraints of the porous media system. Population distributions from the Monte Carlo simulations matched a 1-D transport model that characterized the bacteria/surface interactions as an adsorption-like process even though direct observations suggested no physical attachment was occurring. Consequently, the 1-D transport model provided a semiquantitative approach to approximate bacterial migrations within porous media systems. Results suggest that the self-propulsive nature of bacteria can produce nondiffusive migration patterns within high-surface-area environments.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
NASA Astrophysics Data System (ADS)
Rosario, Dalton S.
2001-08-01
Higher-level decisions for AiTR (aided target recognition) networks have been made so far in our community in an ad-hoc fashion. Higher level decisions in this context do not involve target recognition performance per se, but other inherent output measures of performance, e.g., expected response time, long-term electronic memory required to achieve a tolerable level of image losses. Those measures usually require the knowledge associated with the steady-state, stochastic behavior of the entire network, which in practice is mathematically intractable. Decisions requiring those and similar output measures will become very important as AiTR networks are permanently deployed to the field. To address this concern, I propose to model AiTR systems as an open stochastic-process network and to conduct Monte Carlo simulations based on this model to estimate steady state performances. To illustrate this method, I modeled as proposed a familiar operational scenario and an existing baseline AiTR system. Details of the stochastic model and its corresponding Monte-Carlo simulation results are discussed in the paper.
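The steady-state output measures mentioned above (expected response time, queue buildup) come from simulating the network as a chain of service stations. A minimal open-network sketch under strong assumptions (a single M/M/1 stage with invented arrival and service rates, standing in for a full AiTR pipeline) shows how a discrete-event Monte Carlo run recovers a known steady-state response time:

```python
import random

def mm1_response_times(lam, mu, n_jobs=50000, seed=2):
    # Discrete-event simulation of an M/M/1 queue, the simplest open
    # stochastic-process network: Poisson(lam) arrivals, exponential(mu)
    # service, one FIFO server. Uses the Lindley recursion via the
    # time at which the server next becomes free.
    rng = random.Random(seed)
    t_arrive, t_free, resp = 0.0, 0.0, []
    for _ in range(n_jobs):
        t_arrive += rng.expovariate(lam)        # next arrival
        start = max(t_arrive, t_free)           # wait if server busy
        t_free = start + rng.expovariate(mu)    # departure time
        resp.append(t_free - t_arrive)          # sojourn (response) time
    return resp

resp = mm1_response_times(lam=0.5, mu=1.0)
mean_resp = sum(resp) / len(resp)   # M/M/1 theory: 1 / (mu - lam) = 2.0
```

Chaining several such stages, with routing probabilities between them, gives the kind of network model whose steady-state behavior is otherwise mathematically intractable.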
Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
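The CSAF computation itself is a simple percentile ratio over the simulated population distribution. A sketch with a hypothetical lognormal spread in metabolite formation (not the paper's fitted kinetics; the geometric standard deviation is an assumption chosen so the ratio is analytically checkable):

```python
import random

def csaf(samples, percentile):
    # Chemical-specific adjustment factor: ratio of an upper
    # percentile to the median of the simulated population.
    s = sorted(samples)
    def at(p):
        return s[int(p / 100.0 * len(s))]
    return at(percentile) / at(50)

rng = random.Random(9)
# Hypothetical interindividual variability in metabolite formation,
# lognormal with sigma = 0.7 on the log scale.
formation = [rng.lognormvariate(0.0, 0.7) for _ in range(100000)]
csaf90 = csaf(formation, 90)   # lognormal theory: exp(1.2816 * 0.7) ~ 2.45
```

Dividing the 90th (or 99th) percentile by the median, as in the abstract, converts the simulated population spread directly into an uncertainty factor comparable with the 3.16 default.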
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
Conventional semi-infinite solutions for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3 carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
NASA Astrophysics Data System (ADS)
Castells, Victoria; Van Tassel, Paul R.
2005-02-01
Proteins often undergo changes in internal conformation upon interacting with a surface. We investigate the thermodynamics of surface induced conformational change in a lattice model protein using a multicanonical Monte Carlo method. The protein is a linear heteropolymer of 27 segments (of types A and B) confined to a cubic lattice. The segmental order and nearest neighbor contact energies are chosen to yield, in the absence of an adsorbing surface, a unique 3×3×3 folded structure. The surface is a plane of sites interacting either equally with A and B segments (equal affinity surface) or more strongly with the A segments (A affinity surface). We use a multicanonical Monte Carlo algorithm, with configuration bias and jump walking moves, featuring an iteratively updated sampling function that converges to the reciprocal of the density of states 1/Ω(E), E being the potential energy. We find inflection points in the configurational entropy, S(E) = k ln Ω(E), for all but a strongly adsorbing equal affinity surface, indicating the presence of free energy barriers to transition. When protein-surface interactions are weak, the free energy profiles F(E) = E − TS(E) qualitatively resemble those of a protein in the absence of a surface: a free energy barrier separates a folded, lowest energy state from globular, higher energy states. The surface acts in this case to stabilize the globular states relative to the folded state. When the protein surface interactions are stronger, the situation differs markedly: the folded state no longer occurs at the lowest energy and free energy barriers may be absent altogether.
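The iteratively updated sampling function that converges to 1/Ω(E) is the heart of multicanonical methods. A Wang-Landau-style sketch on a toy model (energy = number of set bits in an n-bit string, so the exact density of states is the binomial coefficient C(n, E)) illustrates the flat-histogram idea; this is not the authors' configuration-bias/jump-walking implementation.

```python
import math
import random

def wang_landau(n=10, f_min=1e-6, seed=13):
    # Wang-Landau sampling: walk with weight 1/g(E); every visit
    # multiplies g(E) by f, and f -> sqrt(f) once the accumulated
    # energy histogram is roughly flat, so ln g converges to ln Omega.
    rng = random.Random(seed)
    state, e = [0] * n, 0
    lng = [0.0] * (n + 1)          # running estimate of ln g(E)
    hist = [0] * (n + 1)
    lnf = 1.0
    while lnf > f_min:
        for _ in range(20000):
            i = rng.randrange(n)
            e_new = e + (1 if state[i] == 0 else -1)
            # accept with prob min(1, g(e) / g(e_new))
            if math.log(rng.random()) < lng[e] - lng[e_new]:
                state[i] ^= 1
                e = e_new
            lng[e] += lnf
            hist[e] += 1
        if min(hist) > 0.8 * (sum(hist) / len(hist)):   # flatness check
            hist = [0] * (n + 1)
            lnf /= 2.0             # halving ln f == f -> sqrt(f)
    return [v - lng[0] for v in lng]    # normalize so ln g(0) = 0

lng = wang_landau()   # should approach ln C(10, E)
```

Sampling with weights 1/g(E) flattens the visited-energy histogram, so rare low-entropy states (here E = 0 and E = n; in the paper, the folded state) are visited as often as the vast globular region.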
NASA Astrophysics Data System (ADS)
Arifin, P.; Goldys, E.; Tansley, T. L.
1995-08-01
We present a method of simulating electron transport in low-temperature-grown GaAs by the Monte Carlo method. Low-temperature-grown GaAs contains microscopic inclusions of As, and these inhomogeneities make standard Monte Carlo mobility simulations impossible. Our method overcomes this difficulty and allows quantitative prediction of electron transport on the basis of principal microscopic material parameters, including the impurity and precipitate concentrations and the precipitate size. The adopted approach involves simulation of a single electron trajectory in real space, while the influence of As precipitates on the GaAs matrix is treated in the framework of a Schottky-barrier model. The validity of this approach is verified by evaluation of the drift velocity in homogeneous GaAs, where excellent agreement with other workers' results is reached. The drift velocity as a function of electric field in low-temperature-grown GaAs is calculated for a range of As precipitate concentrations. The effect of the compensation ratio on the drift-velocity characteristics is also investigated. It is found that the drift velocity is reduced and the electric field at which the onset of negative differential mobility occurs increases as the precipitate concentration increases. Both effects are related to the reduced electron mean free path in the presence of precipitates. Additionally, the comparatively high low-field electron mobilities in this material are theoretically explained.
NASA Astrophysics Data System (ADS)
Matsumoto, Hiroaki
2002-12-01
The variable sphere (VS) molecular model for the Monte Carlo simulation of rarefied gas flow is introduced to provide consistency for diffusion and viscosity cross-sections with those of any realistic intermolecular potential. It is then applied to the inverse power law (IPL) and Lennard-Jones (LJ) potentials. The VS model has a much simpler scattering law than either the variable hard sphere (VHS) or variable soft sphere (VSS) models; also, it has almost the same computational efficiency as the VHS and VSS models. A simulation of velocity relaxation in a homogeneous space and two comparative simulations of molecular diffusion in a homogeneous heat-bath gas and normal shock wave structure in a monatomic gas are made to examine VS model validity. The relaxation to a Maxwellian distribution function and equipartition between all degrees of freedom are well established; good agreement is shown in the molecular diffusion and shock wave structure between the VS model and the IPL and LJ potentials. The VS model is combined with the statistical inelastic cross-section (SICS) model and applied to simulation of translational and rotational energy relaxation in a homogeneous space. The VS model shows the relaxation of Maxwellian and Boltzmann distribution functions and equipartition between all degrees of freedom. Comparative calculation between the VS model with the SICS (VS-SICS) model and the VSS model with the SICS (VSS-SICS) model is made for rotational relaxation in a nitrogen normal shock wave. Good agreement is shown in the shock wave structure and rotational energy distribution function between the VS-SICS model and the VSS-SICS model. This study demonstrates that diffusion and viscosity cross-sections, rather than the scattering law of each molecular collision, affect macroscopic transport phenomena.
NASA Astrophysics Data System (ADS)
Cheng, Guoxin; Liu, Lie
2011-06-01
Based on Vaughan's empirical formula for secondary emission yield and the assumption that the types of secondary electron are mutually exclusive, a mathematically self-consistent secondary emission model is proposed. It identifies each generated secondary electron as either elastically reflected, rediffused, or a true secondary, and hence allows the use of distinct emission energy and angular distributions for each type of electron. A Monte Carlo implementation of the model is presented, and second-order algorithms for particle collection and ejection at the secondary-emission wall are developed in order to incorporate the secondary electron emission process in the standard leap-frog integrator. The accuracy of these algorithms is analyzed for general fields and is confirmed by comparing the numerically computed values with the exact solution in a homogeneous magnetic field. In particular, the phenomenon of multipactor electron discharge on a dielectric is simulated to verify the usefulness of the model developed in this paper.
Monte Carlo Simulation of an Arc Therapy Treatment by Means of a PC Distribution Model
NASA Astrophysics Data System (ADS)
Leal, A.; Sánchez-Doblado, F.; Perucha, M.; Rincón, M.; Arráns, R.; Bernal, C.; Carrasco, E.
It would always be desirable to have an independent assessment of a planning system. Monte Carlo (MC) offers an accurate way of checking dose distributions in non-homogeneous volumes. Nevertheless, its main drawback is the long processing time required.
Monte Carlo simulated coronary angiograms of realistic anatomy and pathology models
NASA Astrophysics Data System (ADS)
Kyprianou, Iacovos S.; Badal, Andreu; Badano, Aldo; Banh, Diemphuc; Freed, Melanie; Myers, Kyle J.; Thompson, Laura
2007-03-01
We have constructed a fourth generation anthropomorphic phantom which, in addition to the realistic description of the human anatomy, includes a coronary artery disease model. A watertight version of the NURBS-based Cardiac-Torso (NCAT) phantom was generated by converting the individual NURBS surfaces of each organ into closed, manifold and non-self-intersecting tessellated surfaces. The resulting 330 surfaces of the phantom organs and tissues are now comprised of ~5×10^6 triangles whose size depends on the individual organ surface normals. A database of the elemental composition of each organ was generated, and material properties such as density and scattering cross-sections were defined using PENELOPE. A 300 μm resolution model of a heart with 55 coronary vessel segments was constructed by fitting smooth triangular meshes to a high resolution cardiac CT scan we have segmented, and was consequently registered inside the torso model. A coronary artery disease model that uses hemodynamic properties such as blood viscosity and resistivity was used to randomly place plaque within the artery tree. To generate x-ray images of the aforementioned phantom, our group has developed an efficient Monte Carlo radiation transport code based on the subroutine package PENELOPE, which employs an octree spatial data-structure that stores and traverses the phantom triangles. X-ray angiography images were generated under realistic imaging conditions (90 kVp, 10° W-anode spectra with 3 mm Al filtration, ~5×10^11 x-ray source photons, and 10% per volume iodine contrast in the coronaries). The images will be used in an optimization algorithm to select the optimal technique parameters for a variety of imaging tasks.
MBR Monte Carlo Simulation in PYTHIA8
NASA Astrophysics Data System (ADS)
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and the three diffraction-dissociation contributions to the latter: single diffraction, double diffraction, and central diffraction (double-Pomeron exchange). The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Saloranta, Tuomo M; Armitage, James M; Haario, Heikki; Naes, Kristoffer; Cousins, Ian T; Barton, David N
2008-01-01
Multimedia environmental fate models are useful tools to investigate the long-term impacts of remediation measures designed to alleviate potential ecological and human health concerns in contaminated areas. Estimating and communicating the uncertainties associated with the model simulations is a critical task for demonstrating the transparency and reliability of the results. The Extended Fourier Amplitude Sensitivity Test (Extended FAST) method for sensitivity analysis and the Bayesian Markov chain Monte Carlo (MCMC) method for uncertainty analysis and model calibration have several advantages over methods typically applied for multimedia environmental fate models. Most importantly, the simulation results and their uncertainties can be anchored to the available observations and their uncertainties. We apply these techniques for simulating the historical fate of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the Grenland fjords, Norway, and for predicting the effects of different contaminated sediment remediation (capping) scenarios on the future levels of PCDD/Fs in cod and crab therein. The remediation scenario simulations show that a significant remediation effect can first be seen when significant portions of the contaminated sediment areas are cleaned up, and that an increase in capping area leads to both earlier achievement of good fjord status and narrower uncertainty in the predicted timing for this.
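A minimal illustration of the Bayesian MCMC calibration idea (anchoring model parameters to observations), using a toy one-parameter decay model rather than the authors' multimedia fate model; all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy "fate model": first-order decline of a sediment concentration with
# unknown rate k, standing in for the multimedia model's parameters.
def model(k, t):
    return 100.0 * np.exp(-k * t)

t_obs = np.array([1.0, 5.0, 10.0, 20.0])
sigma = 2.0                                   # assumed observation error
y_obs = model(0.1, t_obs) + rng.normal(0.0, sigma, size=t_obs.size)

def log_post(k):
    # Flat prior on k > 0; Gaussian likelihood anchors k to the observations.
    if k <= 0.0:
        return -np.inf
    resid = y_obs - model(k, t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampler.
chain, k = [], 0.5
for _ in range(20_000):
    prop = k + rng.normal(0.0, 0.02)
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)

posterior = np.array(chain[5_000:])           # discard burn-in
print(f"k: {posterior.mean():.3f} +/- {posterior.std():.3f}")
```

The posterior spread directly expresses the calibrated parameter uncertainty that can then be propagated into scenario predictions.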
Morton, April M; McManamay, Ryan A; Nagle, Nicholas N; Piburn, Jesse O; Stewart, Robert N; Surendran Nair, Sujithkumar
2016-01-01
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Keywords: Energy Modeling, Water Modeling, Monte Carlo Simulation, Uncertainty Quantification. Acknowledgment: This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
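The input-error propagation idea can be sketched with a toy Monte Carlo demand model; the input distributions and magnitudes below are invented for illustration, not drawn from the study's data sources:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_draws = 50_000

# Hypothetical block-level inputs with stated uncertainty: household count
# (Gaussian error on a census figure) and per-household use (lognormal spread).
households = rng.normal(400.0, 20.0, size=n_draws).clip(min=0.0)
kwh_per_household = rng.lognormal(np.log(900.0), 0.15, size=n_draws)

# Propagating both error sources yields a distribution of block demand
# instead of a single point estimate.
block_demand = households * kwh_per_household   # monthly kWh, simulated

lo, mid, hi = np.percentile(block_demand, [5, 50, 95])
print(f"median {mid:,.0f} kWh, 90% interval [{lo:,.0f}, {hi:,.0f}]")
```

Reporting the interval alongside the median is what makes the output "reflective of uncertainties" in the sense described above.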
Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source
NASA Astrophysics Data System (ADS)
McArthur, Matthew S.; Rees, Lawrence B.; Czirr, J. Bart
2016-08-01
Using the combination of a neutron-sensitive 6Li glass scintillator detector with a neutron-insensitive 7Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on 6Li. We used this detector with a 252Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.
Makeev, Alexei G; Kurkina, Elena S; Kevrekidis, Ioannis G
2012-06-01
Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed sometimes persist for hundreds of rotations, but they are ultimately unstable and break up (because of fluctuations and interactions between neighboring fronts), giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. It is interesting that travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, owing to the non-local reaction kinetics.
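For flavor, a well-mixed Gillespie-style stochastic Lotka-Volterra simulation is sketched below; note that the paper's model is spatial (a square lattice with local reactions), which this sketch deliberately omits, and all rate constants are invented:

```python
import random

random.seed(3)

# Well-mixed Gillespie SSA for Lotka-Volterra kinetics.
# Reactions: prey birth A -> 2A, predation A + B -> 2B, predator death B -> 0.
k_birth, k_pred, k_death = 1.0, 0.005, 0.6
a, b, t = 100, 50, 0.0
history = [(t, a, b)]
# The prey-count guard keeps the run short if predators die out.
while t < 20.0 and b > 0 and 0 < a < 5_000:
    rates = [k_birth * a, k_pred * a * b, k_death * b]
    total = sum(rates)
    t += random.expovariate(total)   # exponential waiting time to next event
    r = random.uniform(0.0, total)   # pick which reaction fires
    if r < rates[0]:
        a += 1
    elif r < rates[0] + rates[1]:
        a, b = a - 1, b + 1
    else:
        b -= 1
    history.append((t, a, b))
print(f"steps: {len(history)}, final (prey, predators) = ({a}, {b})")
```

On a lattice, the same event-driven bookkeeping is applied per site, which is what produces the pulses and spiral waves described above.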
NASA Astrophysics Data System (ADS)
Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul
2002-06-01
The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife®, a precision method for treating intracranial lesions. Radiation from a single 60Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet for photons and electrons were calculated. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife® dosimetry, has been computed and compared with experimental results, with calculations performed by other authors with the use of the EGS4 Monte Carlo code, and data provided by the treatment planning system Gamma Plan®. Good agreement was found between these data and results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery.
Liu, Changzheng; Lin, Zhenhong
2016-12-08
Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and there is a substantial risk of low penetration in the early and midterm market. The top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift the distributions toward the right, i.e., increase the probability of great market success.
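The coupling of a choice model with Monte Carlo sampling of uncertain parameters can be sketched with a plain (non-nested) two-alternative logit; all coefficients and vehicle attributes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_draws = 10_000

# Two alternatives (ICE, PEV) with hypothetical attributes.
price = np.array([30.0, 35.0])   # purchase price, $1000s
fuel = np.array([12.0, 4.0])     # annual energy cost, $100s

# Uncertain price sensitivity is sampled per draw; the fuel-cost
# sensitivity is held fixed for this sketch.
beta_price = rng.normal(0.15, 0.04, size=n_draws).clip(min=0.01)
beta_fuel = 0.10

v = -beta_price[:, None] * price - beta_fuel * fuel   # utilities, (draws, 2)
shares = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
pev = shares[:, 1]                                    # PEV share per draw

print(f"PEV share: median {np.median(pev):.1%}, "
      f"90% interval [{np.percentile(pev, 5):.1%}, {np.percentile(pev, 95):.1%}]")
```

Each Monte Carlo draw yields one market-share outcome, so the collection of draws is the market-penetration distribution whose spread and right-shift the study discusses.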
Liu, Changzheng; Oak Ridge National Lab.; Lin, Zhenhong; ...
2016-12-08
Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and there is a substantial risk of low penetration in the early and midterm market. The top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift the distributions toward the right, i.e., increase the probability of great market success.
Assessment of high-fidelity collision models in the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Weaver, Andrew B.
Advances in computer technology over the decades have allowed more complex physics to be modeled in the DSMC method. In the first paper on DSMC in 1963, 30,000 collision events per hour were simulated using a simple hard sphere model. Today, more than 10 billion collision events can be simulated per hour for the same problem. Many new and more physically realistic collision models, such as the Lennard-Jones potential and the forced harmonic oscillator model, have been introduced into DSMC. However, the fact that computer resources are more readily available and higher-fidelity models have been developed does not necessitate their usage. It is important to understand how such high-fidelity models affect the output quantities of interest in engineering applications. The effects of elastic and inelastic collision models on compressible Couette flow, ground-state atomic oxygen transport properties, and normal shock waves have therefore been investigated. Recommendations for variable soft sphere and Lennard-Jones model parameters are made based on a critical review of recent ab-initio calculations and experimental measurements of transport properties.
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
- Good-quality risk data, usually collected in risk interviews of the project team, management and others knowledgeable in the risks of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project
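A minimal sketch of the risk-driver Monte Carlo idea (probabilistic risk factors multiplying activity durations, with contingency read off at P-80); the three-activity serial schedule and all distributions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=11)
n = 20_000

# Hypothetical three-activity serial schedule. One risk driver, if it
# occurs, multiplies every activity duration (risk driver method).
base = np.array([30.0, 45.0, 25.0])       # planned durations, days
p_risk, factor_lo, factor_hi = 0.4, 1.05, 1.40

occurs = rng.uniform(size=n) < p_risk
factor = np.where(occurs, rng.uniform(factor_lo, factor_hi, size=n), 1.0)
# Estimating (duration) uncertainty on each activity, independent of risks.
noise = rng.triangular(0.9, 1.0, 1.3, size=(n, base.size))

total = (base * noise).sum(axis=1) * factor

p80 = np.percentile(total, 80)
print(f"deterministic plan: {base.sum():.0f} d, P-80: {p80:.0f} d, "
      f"contingency: {p80 - base.sum():.0f} d")
```

The gap between the P-80 result and the deterministic plan is the schedule contingency reserve in the sense used above; simulating cost the same way and pairing the draws yields the 'football chart'.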
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
NASA Astrophysics Data System (ADS)
Mankodi, T. K.; Bhandarkar, U. V.; Puranik, B. P.
2017-08-01
A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2+N2→N2+2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N4 potential energy surface and quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that can be compared well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.
Mankodi, T K; Bhandarkar, U V; Puranik, B P
2017-08-28
A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2+N2→N2+2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N4 potential energy surface and quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that can be compared well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.
Zhong, Xiewei; Wen, Xiang; Zhu, Dan
2014-01-27
Fiber reflectance spectroscopy is a non-invasive method for diagnosing skin diseases or evaluating aesthetic efficacy, but it depends on the validity of the inverse model. In this work, a lookup-table-based inverse model is developed using two-layered Monte Carlo simulations in order to extract the physiological and optical properties of skin. The melanin volume fraction and blood oxygen parameters are extracted from fiber reflectance spectra of in vivo human skin. The former shows good agreement with a commercial skin-melanin probe, and the latter (based on forearm venous occlusion, ischemia, and hot compress experiments) shows that the measurements are in agreement with physiological changes. These results verify the potential of this spectroscopy technique for evaluating the physiological characteristics of human skin.
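The lookup-table inversion can be illustrated with a toy forward model standing in for the precomputed Monte Carlo spectra; the grid, the spectral form, and the parameter names below are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(seed=9)

# Hypothetical lookup table: reflectance spectra precomputed (in the paper,
# by two-layered Monte Carlo) on a grid of melanin fraction and oxygenation.
melanin = np.linspace(0.01, 0.10, 10)
sto2 = np.linspace(0.3, 1.0, 8)
wavelengths = np.linspace(450.0, 700.0, 26)

def forward(m, s):
    # Toy spectral model standing in for the MC-generated table entries.
    return np.exp(-40.0 * m * (500.0 / wavelengths)) * (0.5 + 0.5 * s * (wavelengths / 700.0))

table = np.array([[forward(m, s) for s in sto2] for m in melanin])

def invert(spectrum):
    # Grid search: return the (melanin, StO2) pair whose table spectrum
    # is closest to the measurement in least squares.
    err = ((table - spectrum) ** 2).sum(axis=-1)
    i, j = np.unravel_index(err.argmin(), err.shape)
    return melanin[i], sto2[j]

measured = forward(0.05, 0.8) + rng.normal(0.0, 0.002, size=wavelengths.size)
print(invert(measured))
```

Because the table is precomputed, inversion at measurement time is just a nearest-spectrum search, which is what makes the lookup-table approach fast compared with running Monte Carlo in the inversion loop.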
NASA Astrophysics Data System (ADS)
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.
2015-12-01
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
Ye, Hong-Zhou; Sun, Chong; Jiang, Hong
2015-03-14
Materials with spin-crossover (SCO) properties hold great potential for information storage and have therefore received considerable attention in recent decades. The hysteresis phenomena accompanying SCO are attributed to intermolecular cooperativity whose underlying mechanism may have a vibronic origin. In this work, a new vibronic Ising-like model, in which the elastic coupling between SCO centers is included through harmonic stretching and bending (SAB) interactions, is proposed and solved by Monte Carlo (MC) simulations. The key parameters in the new model, k1 and k2, corresponding to the elastic constants of the stretching and bending modes, respectively, can be directly related to the macroscopic bulk and shear moduli of the material of study, which can be readily estimated either from experimental measurements or from first-principles calculations. Using realistic parameters estimated from density-functional theory calculations of a specific polymeric coordination SCO compound, [Fe(pz)Pt(CN)4]·2H2O (pz = pyrazine), temperature-induced hysteresis and pressure effects on SCO phenomena are simulated successfully. Our MC simulations shed light on the role of vibronic couplings in the thermal hysteresis of SCO systems, and also point out the limitations of highly simplified Ising-like models for the quantitative description of real SCO systems, which will be of great value for the development of more realistic SCO models.
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S
2015-12-07
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
Monte Carlo Simulation Modeling of a Regional Stroke Team’s Use of Telemedicine
Torabi, Elham; Froehle, Craig M.; Lindsell, Chris J.; Moomaw, Charles J.; Kanter, Daniel; Kleindorfer, Dawn; Adeoye, Opeolu
2015-01-01
Objectives: The objective of this study was to evaluate operational policies that may improve the proportion of eligible stroke patients within a population who would receive intravenous recombinant tissue plasminogen activator (rt-PA), and minimize time to treatment in eligible patients. Methods: In the context of a regional stroke team, the authors examined the effects of staff location and telemedicine deployment policies on the timeliness of thrombolytic treatment, and estimated the efficacy and cost-effectiveness of six different policies. A process map comprising the steps from recognition of stroke symptoms to intravenous administration of rt-PA was constructed using data from published literature combined with expert opinion. Six scenarios were investigated: telemedicine deployment (none, all, or outer-ring hospitals only); and staff location (center of region or anywhere in region). Physician locations were randomly generated based on their zip codes of residence and work. The outcomes of interest were onset-to-treatment (OTT) time, door-to-needle (DTN) time, and the proportion of patients treated within three hours. A Monte Carlo simulation of the stroke team care-delivery system was constructed based on a primary dataset of 121 ischemic stroke patients who were potentially eligible for treatment with rt-PA. Results: With the physician located randomly in the region, deploying telemedicine at all hospitals in the region (compared with partial or no telemedicine) would result in the highest rates of treatment within three hours (80% vs. 75% vs. 70%) and the shortest OTT (148 vs. 164 vs. 176 minutes) and DTN (45 vs. 61 vs. 73 minutes) times. However, locating the on-call physician centrally coupled with partial telemedicine deployment (five of the 17 hospitals) would be most cost-effective with comparable eligibility and treatment times. Conclusions: Given the potential societal benefits, continued efforts to deploy telemedicine appear warranted. Aligning the
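A stripped-down Monte Carlo of a care-delivery process map might look like the following; the stage-delay distributions are invented placeholders, not the study's fitted values, and the real process map has more steps and policy-dependent travel times:

```python
import numpy as np

rng = np.random.default_rng(seed=13)
n = 50_000

# Hypothetical stage delays (minutes) along the onset-to-treatment pathway.
onset_to_door = rng.lognormal(np.log(60.0), 0.5, size=n)
door_to_imaging = rng.triangular(10.0, 20.0, 40.0, size=n)
physician_response = rng.triangular(5.0, 15.0, 45.0, size=n)  # shorter with telemedicine
imaging_to_needle = rng.triangular(10.0, 20.0, 35.0, size=n)

# Summing sampled stage delays per simulated patient gives the OTT and DTN
# distributions that a policy change (e.g. telemedicine) would shift.
ott = onset_to_door + door_to_imaging + physician_response + imaging_to_needle
dtn = ott - onset_to_door

print(f"median OTT {np.median(ott):.0f} min, median DTN {np.median(dtn):.0f} min, "
      f"treated <180 min: {(ott < 180).mean():.0%}")
```

Re-running the simulation with a tightened physician-response distribution is how the effect of each deployment policy on treatment rates can be compared.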
Monte Carlo Simulation Modeling of a Regional Stroke Team's Use of Telemedicine.
Torabi, Elham; Froehle, Craig M; Lindsell, Christopher J; Moomaw, Charles J; Kanter, Daniel; Kleindorfer, Dawn; Adeoye, Opeolu
2016-01-01
The objective of this study was to evaluate operational policies that may improve the proportion of eligible stroke patients within a population who would receive intravenous recombinant tissue plasminogen activator (rt-PA) and minimize time to treatment in eligible patients. In the context of a regional stroke team, the authors examined the effects of staff location and telemedicine deployment policies on the timeliness of thrombolytic treatment, and estimated the efficacy and cost-effectiveness of six different policies. A process map comprising the steps from recognition of stroke symptoms to intravenous administration of rt-PA was constructed using data from published literature combined with expert opinion. Six scenarios were investigated: telemedicine deployment (none, all, or outer-ring hospitals only) and staff location (center of region or anywhere in region). Physician locations were randomly generated based on their zip codes of residence and work. The outcomes of interest were onset-to-treatment (OTT) time, door-to-needle (DTN) time, and the proportion of patients treated within 3 hours. A Monte Carlo simulation of the stroke team care-delivery system was constructed based on a primary data set of 121 ischemic stroke patients who were potentially eligible for treatment with rt-PA. With the physician located randomly in the region, deploying telemedicine at all hospitals in the region (compared with partial or no telemedicine) would result in the highest rates of treatment within 3 hours (80% vs. 75% vs. 70%) and the shortest OTT (148 vs. 164 vs. 176 minutes) and DTN (45 vs. 61 vs. 73 minutes) times. However, locating the on-call physician centrally coupled with partial telemedicine deployment (five of the 17 hospitals) would be most cost-effective with comparable eligibility and treatment times. Given the potential societal benefits, continued efforts to deploy telemedicine appear warranted. Aligning the incentives between those who would have to fund
Leasing policy and the rate of petroleum development: analysis with a Monte Carlo simulation model
Abbey, D; Bivins, R
1982-03-01
The study has two objectives: first, to consider whether alternative leasing systems are desirable to speed the rate of oil and gas exploration and development in frontier basins; second, to evaluate the Petroleum Activity and Decision Simulation model developed by the US Department of the Interior for economic and land use planning and for policy analysis. Analysis of the model involved structural variation of the geology, exploration, and discovery submodels and also involved a formal sensitivity analysis using the Latin Hypercube Sampling Method. We report the rate of exploration, discovery, and petroleum output under a variety of price, leasing policy, and tax regimes.
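The Latin Hypercube Sampling method used in the sensitivity analysis can be sketched in a few lines; this is the generic stratified-sampling construction, not the Interior Department model's implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One stratified draw per equal-probability bin along each dimension,
    with the bin order permuted independently per dimension."""
    u = (np.arange(n_samples)[:, None]
         + rng.uniform(size=(n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(seed=5)
pts = latin_hypercube(10, 3, rng)
# Each column hits every decile [k/10, (k+1)/10) exactly once.
print(np.sort((pts[:, 0] * 10).astype(int)))
```

Compared with plain random sampling, this guarantees the full range of each uncertain input (price, geology parameters, tax rates) is covered with far fewer model runs.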
Cluster hybrid Monte Carlo simulation algorithms.
Plascak, J A; Ferrenberg, Alan M; Landau, D P
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
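A minimal sketch of such a hybrid update for the spin-1/2 Ising model, alternating one Wolff cluster flip with one Metropolis sweep (lattice size, temperature, and update counts are arbitrary illustration choices, not the paper's settings):

```python
import numpy as np

def metropolis_sweep(s, beta, rng):
    """One sweep of single-spin-flip Metropolis updates on a 2D Ising lattice."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

def wolff_flip(s, beta, rng):
    """Grow one Wolff cluster (bond probability 1 - exp(-2*beta)) and flip it."""
    L = s.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(0, L, size=2)
    seed_spin = s[i, j]
    cluster, stack = {(i, j)}, [(i, j)]
    while stack:
        x, y = stack.pop()
        for n in (((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)):
            if n not in cluster and s[n] == seed_spin and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for x, y in cluster:
        s[x, y] = -s[x, y]

# Hybrid update: alternate cluster flips with single-spin sweeps.
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
beta = 0.6                        # below T_c (beta_c ~ 0.4407): ordered phase
for _ in range(200):
    wolff_flip(spins, beta, rng)
    metropolis_sweep(spins, beta, rng)
magnetization = abs(spins.mean())
```

The cluster move decorrelates large-scale order quickly while the Metropolis sweep relaxes short-wavelength fluctuations, which is the division of labor behind the performance gain reported above.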
Hanford, Amanda D; O'Connor, Patrick D; Anderson, James B; Long, Lyle N
2008-06-01
In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of frequencies. This particle method allows for treatment of acoustic phenomena at high Knudsen numbers, corresponding to low densities and a high ratio of the molecular mean free path to wavelength. Different methods to model the internal degrees of freedom of diatomic molecules and the exchange of translational, rotational and vibrational energies in collisions are employed in the current simulations of a diatomic gas. One of these methods is the fully classical rigid-rotor/harmonic-oscillator model for rotation and vibration. A second method takes into account the discrete quantum energy levels for vibration with the closely spaced rotational levels classically treated. This method gives a more realistic representation of the internal structure of diatomic and polyatomic molecules. Applications of these methods are investigated in diatomic nitrogen gas in order to study the propagation of sound and its attenuation and dispersion along with their dependence on temperature. With the direct simulation method, significant deviations from continuum predictions are also observed for high Knudsen number flows.
Sweeney, L M; Tyler, T R; Kirman, C R; Corley, R A; Reitz, R H; Paustenbach, D J; Holson, J F; Whorton, M D; Thompson, K M; Gargas, M L
2001-07-01
Methoxyethanol (ethylene glycol monomethyl ether, EGME), ethoxyethanol (ethylene glycol monoethyl ether, EGEE), and ethoxyethyl acetate (ethylene glycol monoethyl ether acetate, EGEEA) are all developmental toxicants in laboratory animals. Due to the imprecise nature of the exposure data in epidemiology studies of these chemicals, we relied on human and animal pharmacokinetic data, as well as animal toxicity data, to derive 3 occupational exposure limits (OELs). Physiologically based pharmacokinetic (PBPK) models for EGME, EGEE, and EGEEA in pregnant rats and humans have been developed (M. L. Gargas et al., 2000, Toxicol. Appl. Pharmacol. 165, 53-62; M. L. Gargas et al., 2000, Toxicol. Appl. Pharmacol. 165, 63-73). These models were used to calculate estimated human-equivalent no adverse effect levels (NAELs), based upon internal concentrations in rats exposed to no observed effect levels (NOELs) for developmental toxicity. Estimated NAEL values of 25 ppm for EGEEA and EGEE and 12 ppm for EGME were derived using average values for physiological, thermodynamic, and metabolic parameters in the PBPK model. The uncertainties in the point estimates for the NOELs and NAELs were estimated from the distribution of internal dose estimates obtained by varying key parameter values over expected ranges and probability distributions. Key parameters were identified through sensitivity analysis. Distributions of the values of these parameters were sampled using Monte Carlo techniques and appropriate dose metrics calculated for 1600 parameter sets. The 95th percentile values were used to calculate interindividual pharmacokinetic uncertainty factors (UFs) to account for variability among humans (UF(h,pk)). These values of 1.8 for EGEEA/EGEE and 1.7 for EGME are less than the default value of 3 for this area of uncertainty. The estimated human equivalent NAELs were divided by UF(h,pk) and the default UFs for pharmacodynamic variability among animals and among humans to calculate the
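The Monte Carlo step described above — sampling key parameters and taking the ratio of the 95th percentile to the central estimate of an internal dose metric — can be sketched as follows; the dose-metric formula, parameter names, and distributions here are hypothetical stand-ins, not the published EGME/EGEE PBPK models:

```python
import numpy as np

# Illustrative stand-in for a PBPK dose metric: internal dose per unit intake
# depends on a well-stirred clearance built from two sampled parameters.
# The parameter names and distributions are hypothetical.
rng = np.random.default_rng(2)
n = 1600                                       # parameter sets, as in the abstract
vmax = rng.lognormal(np.log(5.0), 0.25, n)     # metabolic capacity (made up)
qliv = rng.normal(1.5, 0.15, n)                # liver blood flow (made up)
clearance = qliv * vmax / (qliv + vmax)        # simple well-stirred form
dose_metric = 1.0 / clearance                  # internal dose per unit intake

# Interindividual pharmacokinetic UF: 95th percentile over central tendency
uf_pk = np.percentile(dose_metric, 95) / np.mean(dose_metric)
```

When the sampled variability is modest, this ratio comes out well below the default factor of 3, which is the pattern the abstract reports (1.7-1.8).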
NASA Astrophysics Data System (ADS)
Beyerlein, Irene Jane
Many next-generation structural composites are likely to be engineered from stiff fibers embedded in ceramic, metallic, or polymeric matrices. Ironically, complexity in composite failure response, rendering them superior to traditional materials, also makes them difficult to characterize for high reliability design. Challenges lie in modeling the interacting, randomly evolving micromechanical damage, such as fiber break nucleation and coalescence, and in the fact that strength, lifetime, and failure mode vary substantially between otherwise identical specimens. My thesis research involves developing (i) computational, micromechanical stress transfer models around multiple fiber breaks in fiber composites, (ii) Monte Carlo simulation models to reproduce their failure process, and (iii) interpretative probability models. In Chapter 1, a Monte Carlo model is developed to study the effects of fiber strength statistics on the fracture process and strength distribution of unnotched and notched N elastic composite laminae. The simulation model couples a micromechanical stress analysis, called break influence superposition, and Weibull fiber strengths, wherein fiber strength varies negligibly along fiber length. Examination of various statistical aspects of composite failure reveals mechanisms responsible for flaw intolerance in the short notch regime and for toughness in the long notch regime. Probability models and large N approximations are developed in Chapter 2 to model the effects of variation in fiber strength on statistical composite fracture response. Based on the probabilities of simple sequences of failure events, probability models for crack and distributed cluster growth and fracture resistance are developed. Comparisons with simulation results show that these models and approximations successfully predicted the unnotched and notched composite strength distributions and that fracture toughness grows slowly as (ln N)^(1/γ), where γ is the fiber Weibull
NASA Astrophysics Data System (ADS)
Giura, Stefano; Schoen, Martin
2014-08-01
We consider the phase behavior of a simple model of a liquid crystal by means of modified mean-field density-functional theory (MMF DFT) and Monte Carlo simulations in the grand canonical ensemble (GCEMC). The pairwise additive interactions between liquid-crystal molecules are modeled via a Lennard-Jones potential in which the attractive contribution depends on the orientation of the molecules. We derive the form of this orientation dependence through an expansion in terms of rotational invariants. Our MMF DFT predicts two topologically different phase diagrams. At weak to intermediate coupling of the orientation dependent attraction, there is a discontinuous isotropic-nematic liquid-liquid phase transition in addition to the gas-isotropic liquid one. In the limit of strong coupling, the gas-isotropic liquid critical point is suppressed in favor of a fluid- (gas- or isotropic-) nematic phase transition which is always discontinuous. By considering three representative isotherms in parallel GCEMC simulations, we confirm the general topology of the phase diagram predicted by MMF DFT at intermediate coupling strength. From the combined MMF DFT-GCEMC approach, we conclude that the isotropic-nematic phase transition is very weakly first order, thus confirming earlier computer simulation results for the same model [see M. Greschek and M. Schoen, Phys. Rev. E 83, 011704 (2011), 10.1103/PhysRevE.83.011704].
Development of Monte Carlo Capability for Orion Parachute Simulations
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
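The dispersion workflow described above can be sketched as follows; the input names, dispersion rules, and toy core simulation are hypothetical illustrations, not the actual DSS/DSSA/DTVSim parameter sets:

```python
import numpy as np

# Hypothetical nominal input set and analyst-defined dispersion rules.
nominal = {"drag_coeff": 0.8, "mass_kg": 9000.0, "deploy_alt_m": 3000.0}
rules = {
    "drag_coeff":   ("normal", 0.0, 0.05),     # additive Gaussian perturbation
    "mass_kg":      ("uniform", -200.0, 200.0),
    "deploy_alt_m": ("normal", 0.0, 50.0),
}

def disperse(nominal, rules, rng):
    """Perturb each nominal input according to its dispersion rule."""
    out = dict(nominal)
    for name, (kind, a, b) in rules.items():
        out[name] += rng.normal(a, b) if kind == "normal" else rng.uniform(a, b)
    return out

def core_simulation(inputs):
    """Stand-in for the core simulation: a toy steady-state rate of descent."""
    g, rho, area = 9.81, 1.225, 700.0           # drag area is made up
    return np.sqrt(2.0 * inputs["mass_kg"] * g / (rho * inputs["drag_coeff"] * area))

# Repeatedly execute the core simulation with dispersed inputs and store results.
rng = np.random.default_rng(7)
descent_rates = np.array(
    [core_simulation(disperse(nominal, rules, rng)) for _ in range(500)])
```

Keeping the dispersion logic outside the core simulation is what lets one Monte Carlo driver serve several simulations with different architectures, as the abstract describes.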
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
NASA Astrophysics Data System (ADS)
Llano-Restrepo, Mario Andres
A study of concentrated aqueous alkali halide solutions is made at the molecular level, through modeling and computer simulation of their structural and thermodynamic properties. It is found that the HNC approximation is the best integral equation theory to predict such properties within the framework of the primitive model (PM). The intrinsic limitations of the PM in describing ionic association and hydration effects are addressed and discussed in order to emphasize the need for explicitly including the water molecules in the treatment of aqueous electrolyte solutions by means of a civilized model (CM). As a step toward developing a CM as simple as possible, it is shown that a modified version of the SPC model of liquid water in which the Lennard-Jones interaction between intermolecular oxygen sites is replaced by a hard core interaction, is still successful enough to predict the degree of hydrogen bonding of real water. A simple civilized model (SCM) (in which the ions are treated as hard spheres interacting through Coulombic potentials and the water molecules are simulated using the simplified SPC model) is introduced in order to study the changes in the structural features of various aqueous alkali halide solutions upon varying both the concentration and the size of the ions. Both cations and anions are found to be solvated by the water molecules at the expense of a breakdown in the hydrogen-bonded water network. Hydration numbers are reported for the first time for NaBr and KBr, and the first simulation-based estimates for LiBr, NaI and KI are also obtained. In several cases, values of the hydration numbers based on the SCM are found to be in excellent agreement with available experimental results obtained from x-ray diffraction measurements. Finally, it is shown that a neoprimitive model (NPM) can be developed by incorporating some of the structural features seen in the SCM into the short-range part of the PM interionic potential via a shielded square well whose
Crystal nuclei in melts: a Monte Carlo simulation of a model for attractive colloids
NASA Astrophysics Data System (ADS)
Statt, Antonia; Virnau, Peter; Binder, Kurt
2015-09-01
As a model for a suspension of hard-sphere-like colloidal particles where small non-adsorbing dissolved polymers create a depletion attraction, we introduce an effective colloid-colloid potential closely related to the Asakura-Oosawa model, but that does not have any discontinuities. In simulations, this model straightforwardly allows the calculation of the pressure from the virial formula, and the phase transition in the bulk from the liquid to crystalline solid can be accurately located from a study where a stable coexistence of a crystalline slab with a surrounding liquid phase occurs. For this model, crystalline nuclei surrounded by fluid are studied both by identifying the crystal-fluid interface on the particle level (using suitable bond orientational order parameters to distinguish the phases) and by 'thermodynamic' means, i.e. by computing the enhancement of the chemical potential and pressure relative to their coexistence values. We show that the chemical potential can be obtained from simulating thick films, where one wall with a rather long-range repulsion is present, since near this wall, the Widom particle insertion method works, exploiting the fact that the chemical potential in the system is homogeneous. Finally, the surface excess free energy of the nucleus is obtained, for a wide range of nuclei volumes. From this method, it is established that classical nucleation theory works, showing that for the present model, the anisotropy of the interface excess free energy of crystals and their resulting non-spherical shape has only a very small effect on the barrier.
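The Widom particle insertion step mentioned above can be illustrated for a dilute hard-sphere configuration; the box size, diameter, and particle count are arbitrary illustration choices, and the configuration is placed by simple rejection sampling rather than equilibrated by Monte Carlo:

```python
import numpy as np

def widom_hard_spheres(positions, box, sigma, n_trials, rng):
    """Widom test-particle insertion for hard spheres: beta*mu_excess is
    -ln(fraction of random insertions that create no overlap)."""
    hits = 0
    for _ in range(n_trials):
        trial = rng.random(3) * box
        d = positions - trial
        d -= box * np.round(d / box)              # minimum-image convention
        if np.all(np.sum(d * d, axis=1) >= sigma * sigma):
            hits += 1
    frac = hits / n_trials
    return -np.log(frac) if frac > 0 else np.inf

# Dilute hard-sphere configuration placed by rejection sampling (illustrative).
rng = np.random.default_rng(1)
box, sigma, N = 10.0, 1.0, 30
pos = np.empty((0, 3))
while len(pos) < N:
    cand = rng.random(3) * box
    d = pos - cand
    d -= box * np.round(d / box)
    if len(pos) == 0 or np.all(np.sum(d * d, axis=1) >= sigma * sigma):
        pos = np.vstack([pos, cand])
beta_mu_ex = widom_hard_spheres(pos, box, sigma, 20000, rng)
```

At this low packing fraction the insertion acceptance stays high, so the estimator converges quickly; in a dense crystal it fails, which is why the authors evaluate the chemical potential near a repulsive wall where insertions still succeed.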
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.
2017-10-01
The magnetic properties of a mixed spin-5/2 and spin-2 Ising model on two alternating layers of a honeycomb lattice are investigated using Monte Carlo simulations. The different ground-state phase diagrams are established, and eight stable magnetic phases are obtained for fixed magnetic parameters. The critical temperatures are deduced for the different layers. The system exhibits both second-order and first-order phase transitions: the plateau obtained between the high-temperature and low-temperature phases at the critical temperature can be explained by a first-order transition, while the second-order transitions show a continuous change of the magnetization at the reduced critical temperatures. The variation of the magnetizations with the exchange interactions and the crystal field in each layer is given, and magnetic hysteresis cycles are obtained for different layers, temperatures, and crystal fields.
Multi-GPU accelerated multi-spin Monte Carlo simulations of the 2D Ising model
NASA Astrophysics Data System (ADS)
Block, Benjamin; Virnau, Peter; Preis, Tobias
2010-09-01
A modern Graphics Processing Unit (GPU) is able to perform massively parallel scientific computations at low cost. We extend our implementation of the checkerboard algorithm for the two-dimensional Ising model [T. Preis et al., Journal of Computational Physics 228 (2009) 4468-4477] in order to overcome the memory limitations of a single GPU, which enables us to simulate significantly larger systems. Using multi-spin coding techniques, we are able to accelerate simulations on a single GPU by factors of up to 35 compared to an optimized single Central Processing Unit (CPU) core implementation that employs multi-spin coding. By combining the Compute Unified Device Architecture (CUDA) with the Message Passing Interface (MPI) on the CPU level, a single Ising lattice can be updated by a cluster of GPUs in parallel. For large systems, the computation time scales nearly linearly with the number of GPUs used. As proof of concept we reproduce the critical temperature of the 2D Ising model using finite-size scaling techniques.
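The checkerboard decomposition that makes these massively parallel updates possible can be sketched in vectorized (CPU) form; lattice size and temperature are arbitrary illustration choices, and multi-spin coding and CUDA/MPI details are omitted:

```python
import numpy as np

def checkerboard_sweep(s, beta, rng):
    """Metropolis update of all 'black' sites, then all 'white' sites.
    Sites of one colour share no bonds, so they can be updated
    simultaneously -- the layout that maps onto one GPU thread per site."""
    L = s.shape[0]
    ii, jj = np.indices((L, L))
    for colour in (0, 1):
        mask = (ii + jj) % 2 == colour
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2.0 * s * nb
        accept = (dE <= 0) | (rng.random((L, L)) < np.exp(-beta * np.clip(dE, 0.0, None)))
        s[mask & accept] *= -1

rng = np.random.default_rng(3)
s = rng.choice([-1, 1], size=(32, 32))
for _ in range(400):
    checkerboard_sweep(s, beta=0.6, rng=rng)
# energy per site (each bond counted once); approaches -2 deep in the ordered phase
energy = -(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))).mean()
```

Splitting the lattice into two interleaved sub-lattices removes all data dependencies within a colour, which is exactly the property the GPU implementation exploits.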
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model-building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically at most a few models are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated approach over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC generally selected the best model. We believe that the approach we proposed may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
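The screening idea — fitting competing release models to the same profile and ranking them by AIC — can be sketched as follows; the dissolution data and the crude grid-search fitting are invented for illustration:

```python
import numpy as np

# Illustrative in vitro dissolution profile (% released vs hours); made up.
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
y = np.array([18.0, 30.0, 48.0, 70.0, 82.0, 89.0, 96.0])

def aic(y, yhat, k):
    """Akaike Information Criterion for a least-squares fit with k parameters."""
    n = len(y)
    rss = float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * k

# Linear release through the origin: y = a*t (1 parameter)
a = np.sum(t * y) / np.sum(t * t)
aic_linear = aic(y, a * t, 1)

# Weibull release: y = 100*(1 - exp(-(t/td)**b)); crude 2-parameter grid search
best_rss, best_td, best_b = np.inf, None, None
for td in np.linspace(0.5, 6.0, 56):
    for b in np.linspace(0.3, 3.0, 28):
        rss = float(np.sum((y - 100.0 * (1.0 - np.exp(-(t / td) ** b))) ** 2))
        if rss < best_rss:
            best_rss, best_td, best_b = rss, td, b
yhat_w = 100.0 * (1.0 - np.exp(-(t / best_td) ** best_b))
aic_weibull = aic(y, yhat_w, 2)    # lower AIC wins despite the extra parameter
```

For a saturating profile like this, the Weibull model earns its second parameter and beats the linear model on AIC, mirroring the preference the abstract reports.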
NASA Astrophysics Data System (ADS)
Wüst, Thomas; Hulliger, Jürg
2005-02-01
A layer-by-layer growth model is presented for the theoretical investigation of growth-induced polarity formation in solid solutions H1-XGX of polar (H) and nonpolar (G) molecules (X: molar fraction of G molecules in the solid, 0
McCreddin, A; Alam, M S; McNabola, A
2015-01-01
An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies.
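The Monte Carlo technique described above — randomly sampling concentrations from per-microenvironment distributions and time-weighting them — can be sketched as follows; the distributions and time-activity pattern are illustrative stand-ins, not the study's fitted values:

```python
import numpy as np

# Hypothetical lognormal PM10 distributions (geometric mean in ug/m3, GSD)
# and daily hours per microenvironment for a typical office worker.
microenvs = {
    "home":    (18.0, 1.8, 14.0),
    "office":  (25.0, 1.7,  8.0),
    "commute": (45.0, 2.0,  1.5),
    "other":   (30.0, 1.9,  0.5),
}

def simulate_24h_exposure(envs, rng):
    """Sample one concentration per microenvironment and time-weight them."""
    total_hours = sum(h for _, _, h in envs.values())
    exposure = 0.0
    for gm, gsd, hours in envs.values():
        conc = rng.lognormal(np.log(gm), np.log(gsd))
        exposure += conc * hours / total_hours
    return exposure

rng = np.random.default_rng(11)
sims = np.array([simulate_24h_exposure(microenvs, rng) for _ in range(5000)])
```

Because each simulated day only needs the microenvironment distributions, not new measurements, this is the property that gives the technique its utility for predicting exposure without further monitoring data.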
NASA Astrophysics Data System (ADS)
Wang, Wenjing; Qiu, Rui; Ren, Li; Liu, Huan; Wu, Zhen; Li, Chunyan; Li, Junli
2017-09-01
Mean glandular dose (MGD) is determined not only by the compressed breast thickness (CBT) and the glandular content, but also by the distribution of glandular tissues in the breast. Depth dose inside the breast in mammography has drawn wide attention, as glandular dose decreases rapidly with increasing depth. In this study, an experiment using thermoluminescent dosimeters (TLDs) was carried out to validate Monte Carlo simulations of mammography. Percent depth doses (PDDs) at different depths were measured inside simple breast phantoms of different thicknesses, and the experimental values agreed well with those calculated by Geant4. A detailed breast model with a CBT of 4 cm and a glandular content of 50%, constructed in previous work, was then used with Geant4 to study the effects of the distribution of glandular tissues in the breast. The breast model was reversed in the direction of compression to obtain a reverse model with a different distribution of glandular tissues. Depth dose distributions and glandular tissue dose conversion coefficients were calculated. The conversion coefficients were about 10% larger when the breast model was reversed, because the glandular tissues in the reverse model are concentrated in the upper part of the model.
Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K
2015-06-15
Purpose: A predicted-PET-image approach based on analytical filtering for proton range verification has been successfully developed and validated using FLUKA Monte Carlo (MC) codes and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification against GATE/GEANT4 Monte Carlo simulation codes. Methods: We performed two experiments to validate the β+-isotope yields predicted by the analytical model against GATE/GEANT4 simulations. The first experiment evaluates the accuracy of the predicted β+-yields as a function of irradiated proton energy. In the second experiment, we simulate homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam, and the filtered β+-yield distributions from the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off ranges. Results: First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range difference between the filtered and MC-simulated β+-yields in the distal fall-off region is within 1.5 mm for all materials used. These findings validate the usefulness of the analytical filtering model for range verification of proton therapy in GATE Monte Carlo simulations. However, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields with the GATE code, especially in the proximal region; this discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the large discrepancies observed between the MC-simulated and predicted β+-yield distributions, the study proves the effectiveness of the analytical filtering model for proton range verification using
NASA Astrophysics Data System (ADS)
Sinha, Indrajit; Mukherjee, Ashim K.
2014-03-01
The oxidation of CO on Pt-group metal surfaces has attracted widespread attention for a long time due to its interesting oscillatory kinetics and spatiotemporal behavior. The use of STM in conjunction with other experimental data has confirmed the validity of the surface reconstruction (SR) model under low pressure and the more recent surface oxide (SO) model which is possible under sub-atmospheric pressure conditions [1]. In the SR model the surface is periodically reconstructed below a certain low critical CO-coverage and this reconstruction is lifted above a second, higher critical CO-coverage. Alternatively the SO model proposes periodic switching between a low-reactivity metallic surface and a high-reactivity oxide surface. Here we present an overview of our recent kinetic Monte Carlo (KMC) simulation studies on the oscillatory kinetics of surface catalyzed CO oxidation. Different modifications of the lattice gas Ziff-Gulari-Barshad (ZGB) model have been utilized or proposed for this purpose. First we present the effect of desorption on the ZGB reactive-to-poisoned irreversible phase transition in the SR model. Next we discuss our recent research on KMC simulation of the SO model. The ZGB framework is utilized to propose a new model incorporating not only the standard Langmuir-Hinshelwood (LH) mechanism, but also introducing the Mars-van Krevelen (MvK) mechanism for the surface oxide phase [5]. Phase diagrams, which are plots of the long-time averages of various oscillating quantities against the normalized CO pressure, show two or three transitions depending on the CO coverage critical threshold (CT) value beyond which all adsorbed oxygen atoms are converted to surface oxide.
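A minimal sketch of the lattice-gas ZGB dynamics referenced above (the basic Langmuir-Hinshelwood step only, with instantaneous reaction; lattice size, attempt count, and the CO pressure y_co are arbitrary illustration choices):

```python
import numpy as np

EMPTY, CO, O = 0, 1, 2

def zgb_step(lat, y_co, rng):
    """One Ziff-Gulari-Barshad adsorption attempt: CO with probability y_co,
    dissociative O2 otherwise; adsorbates react instantly (CO + O -> CO2,
    both sites vacated) with a randomly chosen unlike neighbour."""
    L = lat.shape[0]
    i, j = rng.integers(0, L, size=2)

    def nbrs(a, b):
        return [((a + 1) % L, b), ((a - 1) % L, b), (a, (b + 1) % L), (a, (b - 1) % L)]

    def react(a, b):
        other = O if lat[a, b] == CO else CO
        partners = [q for q in nbrs(a, b) if lat[q] == other]
        if partners:
            q = partners[rng.integers(len(partners))]
            lat[a, b] = lat[q] = EMPTY

    if rng.random() < y_co:
        if lat[i, j] == EMPTY:
            lat[i, j] = CO
            react(i, j)
    else:
        empties = [q for q in nbrs(i, j) if lat[q] == EMPTY]
        if lat[i, j] == EMPTY and empties:
            q = empties[rng.integers(len(empties))]
            lat[i, j] = lat[q] = O         # dissociative O2 needs two empty sites
            react(i, j)
            if lat[q] == O:
                react(*q)

rng = np.random.default_rng(9)
lattice = np.zeros((20, 20), dtype=np.int8)
for _ in range(30000):
    zgb_step(lattice, y_co=0.05, rng=rng)   # far below y1: oxygen-poisoned regime
o_cov = float((lattice == O).mean())
co_cov = float((lattice == CO).mean())
```

Varying y_co reproduces the model's two irreversible transitions: oxygen poisoning at low CO pressure (as in this run) and CO poisoning at high CO pressure, with a reactive window in between.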
Ethayaraja, M; Dutta, Kanchan; Bandyopadhyaya, Rajdip
2006-08-24
Modeling the nanoparticle formation mechanism in a water-in-oil microemulsion, a self-assembled colloidal template, has been addressed in this paper by two formalisms: the deterministic population balance equation (PBE) model and stochastic Monte Carlo (MC) simulation. These are based on time-scale analysis of elementary events consisting of reactant mass transport, solid solubilization, reaction, coalescence-exchange of drops, and finally nucleation and growth of nanoparticles. For the first time in such a PBE model, realistic binomial redistribution of molecules in the daughter drops (after coalescence-exchange of two drops) has been explicitly implemented. This has resulted in a very general model, applicable to processes with arbitrary relative rates of coalescence-exchange and nucleation. Both the deterministic and stochastic routes could account for the inherent randomness in the elementary events and successfully explained the temporal evolution of the mean and variance of the nanoparticle size distribution. This has been illustrated by comparison with different yet broadly similar experiments, operating either under coalescence-dominant (lime carbonation to make CaCO3 nanoparticles) or nucleation-dominant (hydride hydrolysis to make Ca(OH)2 nanoparticles) regimes. Our calculations are robust in being able to predict over very diverse process operation times: up to 26 min and 5 h for the carbonation and hydrolysis experiments, respectively. Model predictions show that an increase in the external reactant addition rate to the microemulsion solution is beneficial under certain general conditions, increasing the nanoparticle production rate significantly without any undesirable and perceptible change in particle size.
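The binomial redistribution of molecules between daughter drops after a coalescence-exchange event can be sketched as follows; the drop count, number of exchange rounds, and initial Poisson loading are invented for illustration:

```python
import numpy as np

def coalesce_exchange(n1, n2, rng):
    """Fuse two drops and re-split them: the pooled solute molecules are
    redistributed binomially, each molecule choosing a daughter drop
    with probability 1/2."""
    pooled = int(n1) + int(n2)
    d1 = int(rng.binomial(pooled, 0.5))
    return d1, pooled - d1

rng = np.random.default_rng(5)
drops = rng.poisson(20, size=2000)         # molecules per microemulsion drop
total_before = int(drops.sum())
for _ in range(20):                        # rounds of random pairwise exchange
    idx = rng.permutation(len(drops))
    for a, b in zip(idx[::2], idx[1::2]):
        drops[a], drops[b] = coalesce_exchange(drops[a], drops[b], rng)
total_after = int(drops.sum())
```

The binomial split conserves molecules exactly while injecting the drop-to-drop fluctuations that an equal-split rule would suppress, which is why the abstract stresses it as the realistic choice.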
Litaize, O.; Serot, O.
2010-11-15
A Monte Carlo simulation of the fission fragment deexcitation process was developed in order to analyze and predict postfission nuclear data that are of crucial importance for basic and applied nuclear physics. The basic ideas of such a simulation were already developed in the past. In the present work, a refined model is proposed in order to give a reliable description of the distributions related to fission fragments as well as to prompt neutron and {gamma} energies and multiplicities. This refined model is mainly based on a mass-dependent temperature ratio law used for the initial excitation energy partition of the fission fragments and a spin-dependent excitation energy limit for neutron emission. These phenomenological improvements allow us to reproduce with good agreement the {sup 252}Cf(sf) experimental data on prompt fission neutron multiplicity {nu}(A), {nu}(TKE), the neutron multiplicity distribution P({nu}), as well as their energy spectra N(E), and lastly the energy release in fission.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
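To make the fixed-step versus adaptive-step distinction above concrete, here is a sketch of explicit Euler with step-doubling error control on a hypothetical one-bucket nonlinear reservoir, dS/dt = P - kS²; all parameter values are invented for illustration and this is not the case-study model.

```python
def f(S, P, k=0.5):
    # Nonlinear reservoir: storage change = precipitation - discharge.
    return P - k * S ** 2

def euler_fixed(S0, P, T, dt):
    # First-order, explicit, fixed-step integration (cheap but error-prone).
    S, t = S0, 0.0
    while t < T - 1e-12:
        S += dt * f(S, P)
        t += dt
    return S

def euler_adaptive(S0, P, T, tol=1e-6):
    # Step-doubling control: compare one full Euler step against two half
    # steps; accept (with local extrapolation) only when they agree.
    S, t, dt = S0, 0.0, 0.1
    while t < T - 1e-12:
        dt = min(dt, T - t)
        one = S + dt * f(S, P)
        half = S + 0.5 * dt * f(S, P)
        two = half + 0.5 * dt * f(half, P)
        if abs(two - one) <= tol:
            S, t = 2.0 * two - one, t + dt
            dt *= 1.5
        else:
            dt *= 0.5
    return S

coarse = euler_fixed(1.0, P=0.2, T=10.0, dt=1.0)
accurate = euler_adaptive(1.0, P=0.2, T=10.0)
steady = (0.2 / 0.5) ** 0.5   # analytic steady state, ~0.632
```

In an MCMC setting the adaptive integrator is called once per parameter proposal, so the smooth (rather than step-size-dependent) likelihood surface it produces is what removes the artificial posterior bimodality described above.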
NASA Astrophysics Data System (ADS)
Munaò, G.; Costa, D.; Caccamo, C.
2009-04-01
We revisit the thermodynamic and structural properties of fluids of homonuclear hard dumbbells in the framework provided by the reference interaction site model (RISM) theory of molecular fluids. Besides the previously investigated Percus-Yevick (PY) approximation, we test the accuracy of other closures to the RISM equations, imported from the theory of simple fluids; specifically, we study the hypernetted chain (HNC), the modified HNC (MHNC) and, less extensively, the Verlet approximations. We implement our approach for models characterized by several different elongations, up to the case of tangent diatomics, and investigate the whole fluid density range. The theoretical predictions are assessed against Monte Carlo simulations, either available from the literature or newly generated by us. The HNC and PY equations of state, calculated via different routes, on the whole share the same level of accuracy. The MHNC is applied by enforcing an internal thermodynamic consistency constraint, leading to good predictions for the equation of state as the elongation of the dumbbell increases. As for the radial distribution function, the MHNC appears superior to the other theories, especially for tangent diatomics in the high-density limit; the PY approximation is better than the HNC and Verlet closures in the high density or elongation regime. Our structural analysis is supplemented by an accurate inversion procedure to reconstruct from Monte Carlo data and RISM the "exact" direct correlation function. In agreement with such calculations, and consistent with the predictions of rigorous diagrammatic analysis, all theories predict the occurrence in the direct correlation function of a first cusp inside the dumbbell core and (with the obvious exception of the PY) of a second cusp outside; the cusps' heights are also qualitatively well reproduced by the theories, except at high densities.
Li, Jun; Calo, Victor M.
2013-09-15
We present a single-particle Lennard–Jones (L-J) model for CO{sub 2} and N{sub 2}. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO{sub 2} and N{sub 2} agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models predict gas-phase properties with accuracy similar to that of more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH{sub 4}, CO{sub 2} and N{sub 2} are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has accuracy similar to traditional models in the gas phase, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
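A minimal sketch of the two ingredients such Gibbs-ensemble calculations rest on: the single-site 12-6 L-J pair energy and the Metropolis acceptance rule used for trial moves. Parameters are in reduced units, not the paper's fitted CO2/N2 values.

```python
import math, random

def lj(r, eps=1.0, sigma=1.0):
    # 12-6 Lennard-Jones pair energy: a single-particle model collapses a
    # small polyatomic molecule into one such sphere, so no partial charges
    # (and hence no Ewald summation) are needed.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def accept(dE, beta, rng):
    # Metropolis criterion applied to Gibbs-ensemble trial moves
    # (translations, volume exchanges, particle swaps between boxes).
    return dE <= 0.0 or rng.random() < math.exp(-beta * dE)

r_min = 2.0 ** (1.0 / 6.0)   # location of the potential minimum
well_depth = lj(r_min)       # equals -eps for the 12-6 form
```

In the full Gibbs-NVT method, two such boxes exchange volume and particles at fixed total N and V until their pressures and chemical potentials equalize, giving the coexistence densities directly.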
Duda, Yurko; Romero-Martínez, Ascención; Orea, Pedro
2007-06-14
The liquid-vapor phase diagram and surface tension for hard-core Yukawa potential with 4
Zhou, X. W.; Yang, N. Y. C.
2014-03-14
Electronic properties of semiconductor devices are sensitive to defects such as second-phase precipitates, grain sizes, and voids. These defects can evolve over time, especially in oxidizing environments, and it is therefore important to understand the resulting aging behavior for reliable device applications. In this paper, we propose a kinetic Monte Carlo framework capable of simultaneously simulating the evolution of second-phase precipitates, grain sizes, and voids in complicated systems involving many species, including oxygen. This kinetic Monte Carlo model calculates the energy barriers of various events directly from experimental data. As a first step of our model implementation, we incorporate the second-phase formation module in the parallel kinetic Monte Carlo code SPPARKS. Selected aging simulations are performed to examine the formation of second-phase precipitates at the electroplated Au/Bi{sub 2}Te{sub 3} interface under oxygen and oxygen-free environments, and the results are compared with the corresponding experiments.
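The event-selection core of such a kinetic Monte Carlo model can be sketched as a rejection-free (residence-time) step with Arrhenius rates derived from energy barriers; the barrier values, attempt frequency, and temperature below are invented for illustration, not the paper's calibrated data.

```python
import math, random

def kmc_step(barriers_eV, T, nu0=1e13, kB=8.617e-5, rng=random):
    # Rejection-free KMC step: Arrhenius rates r_i = nu0 * exp(-Ea_i / kB*T);
    # choose event i with probability r_i / R and advance the clock by an
    # exponentially distributed waiting time dt = -ln(u) / R, R = sum(r_i).
    rates = [nu0 * math.exp(-Ea / (kB * T)) for Ea in barriers_eV]
    total = sum(rates)
    x, acc = rng.random() * total, 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

rng = random.Random(3)
# Three hypothetical events: the 0.5 eV one dominates at 600 K.
picks = [kmc_step([0.5, 0.8, 1.2], T=600.0, rng=rng)[0] for _ in range(2000)]
```

A parallel code such as SPPARKS distributes this same selection loop over spatial domains, which is what makes many-species aging simulations tractable.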
Hendren, Christine Ogilvie; Badireddy, Appala R; Casman, Elizabeth; Wiesner, Mark R
2013-04-01
Wastewater effluent and sewage sludge are predicted to be important release pathways for nanomaterials used in many consumer products. The uncertainty and variability of potential nanomaterial inputs, nanomaterial properties, and the operation of the wastewater treatment plant contribute to the difficulty of predicting sludge and effluent nanomaterial concentration. With a model parsimony approach, we developed a mass-balance representation of engineered nanomaterial (ENM) behavior based on a minimal number of input variables to describe release quantities to the environment. Our simulations show that significant differences in the removal of silver nanoparticles (nano-Ag) can be expected based on the type of engineered coatings used to stabilize these materials in suspension. At current production estimates, 95% of the estimated effluent concentrations of the nano-Ag considered to be least well-removed by the average wastewater treatment plant are calculated to fall below 0.12 μg/L, while 95% of the estimated sludge concentrations of nano-Ag with coatings that increase their likelihood of being present in biosolids, fall below 0.35 μg/L.
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. To reduce these costs and exploit warranty's marketing potential, the manufacturer needs to master techniques for predicting warranty cost from the reliability characteristics of the product. In this paper a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. The reliability parameters obtained in this way are then used in Monte Carlo simulation to predict the times to failure needed for the warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower costs and increase profit.
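The Monte Carlo warranty-cost step can be sketched as follows for a combined free-replacement/pro-rata policy with Weibull-distributed times to failure; every number below (shape, scale, warranty limits, unit cost) is a made-up placeholder, not the paper's neural-network output.

```python
import math, random

def expected_warranty_cost(shape, scale, w1, w, unit_cost, n, rng):
    # Monte Carlo estimate of the expected per-unit warranty cost:
    # full replacement cost if failure occurs before w1 (free-replacement
    # phase), a linearly prorated refund between w1 and w, zero afterwards.
    total = 0.0
    for _ in range(n):
        # Inverse-transform sample of a Weibull(shape, scale) failure time.
        t = scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
        if t < w1:
            total += unit_cost
        elif t < w:
            total += unit_cost * (w - t) / (w - w1)
    return total / n

rng = random.Random(11)
cost = expected_warranty_cost(shape=2.0, scale=2000.0, w1=500.0, w=1500.0,
                              unit_cost=10.0, n=20000, rng=rng)
```

Rerunning this estimate with distribution parameters predicted for each operating condition is what lets the manufacturer compare candidate warranty policies.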
Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R
2008-08-21
In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. AAPM Report No. 105 addresses issues concerning the clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a range of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100 MeV-250 MeV interval.
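In the spirit of the 3%/3 mm agreement criterion used above, a simplified 1D pass/fail check over a depth-dose curve might look like the following (toy curves, not the commissioning data; the clinical gamma index is a stricter combined dose/distance metric).

```python
def passes_3pct_3mm(depths, measured, simulated, dose_tol=0.03, dta=3.0):
    # Each simulated point passes if it matches the measured dose at the
    # same depth within 3% of the maximum dose, or matches some measured
    # point within 3 mm (a crude distance-to-agreement fallback).
    dmax = max(measured)
    for z, dm, ds in zip(depths, measured, simulated):
        if abs(ds - dm) <= dose_tol * dmax:
            continue
        if any(abs(z - z2) <= dta and abs(ds - dm2) <= dose_tol * dmax
               for z2, dm2 in zip(depths, measured)):
            continue
        return False
    return True

depths = [float(z) for z in range(0, 200, 2)]   # depth in mm
measured = [1.0 + 0.004 * z for z in depths]    # toy monotone dose curve
simulated = [d * 1.01 for d in measured]        # 1% global offset: passes
ok = passes_3pct_3mm(depths, measured, simulated)
```

The distance-to-agreement fallback is what keeps steep-gradient regions such as the distal falloff from failing on dose difference alone, which is exactly where the two discrepant data sets above were found.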
Leroy, Bertrand; Uhart, Mathieu; Maire, Pascal; Bourguignon, Laurent
2012-09-01
Fluoroquinolones are widely used in geriatric patients, but elderly patients are known to be at increased risk of decline in renal function. As fluoroquinolones usually exhibit a dominant renal elimination pathway, reduced dosage regimens are often used in geriatric patients. Our objective was to assess the capability of such reduced dosage regimens of ofloxacin, levofloxacin and ciprofloxacin to reach a pharmacokinetic-pharmacodynamic target of efficacy in elderly patients. Using Monte Carlo simulations, 1000 simulated elderly patients were created, based on published pharmacokinetic and pharmacodynamic data and measured demographic data. Three commonly proposed drug regimens taking renal function into account were evaluated using compartmental models. The probability of reaching an fAUC/MIC >100 was calculated for each regimen. For MICs <1 mg/L, all simulated patients reached the efficacy target. However, at higher MIC values, the proposed regimens were inefficient for patients with moderate or severe renal impairment: only 3.4% and 30.2% of patients with moderate renal impairment reached the efficacy target for ciprofloxacin and ofloxacin, respectively, at an MIC of 2 mg/L. For ciprofloxacin, more than 80% of patients with severe renal impairment were unable to reach the target fAUC/MIC at an MIC as low as 1 mg/L, whereas for levofloxacin all simulated patients reached the efficacy target up to an MIC of 4 mg/L. This suggests that the proposed dosage reduction does not achieve the same exposure in elderly patients with renal impairment, possibly leading to treatment failure or the development of resistant strains.
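The probability-of-target-attainment calculation described above can be sketched as follows; the dose, clearance, variability, and protein-binding numbers are invented placeholders purely for illustration, not the study's published pharmacokinetic values.

```python
import math, random

def pta(dose_mg, cl_median, cl_cv, mic, target=100.0, fu=0.7, n=10000,
        rng=random):
    # Probability of target attainment: fraction of simulated patients
    # whose fAUC/MIC exceeds the target, with daily AUC = dose / CL and
    # log-normal between-patient variability in clearance CL.
    sigma = math.sqrt(math.log(1.0 + cl_cv ** 2))
    hits = 0
    for _ in range(n):
        cl = cl_median * math.exp(rng.gauss(0.0, sigma))
        if fu * (dose_mg / cl) / mic > target:
            hits += 1
    return hits / n

rng = random.Random(5)
p_mic2 = pta(dose_mg=500.0, cl_median=4.0, cl_cv=0.3, mic=2.0, rng=rng)
p_mic05 = pta(dose_mg=500.0, cl_median=4.0, cl_cv=0.3, mic=0.5, rng=rng)
```

Sweeping the MIC over the distribution observed for a target pathogen, and the clearance over renal-function strata, reproduces the kind of regimen-by-regimen comparison reported above.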
Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch
2016-11-14
Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
Curran, Patrick J; Bollen, Kenneth A; Paxton, Pamela; Kirby, James; Chen, Feinian
2002-01-01
The noncentral chi-square distribution plays a key role in structural equation modeling (SEM). The likelihood ratio test statistic that accompanies virtually all SEMs asymptotically follows a noncentral chi-square under certain assumptions relating to misspecification and multivariate distribution. Many scholars use the noncentral chi-square distribution in the construction of fit indices, such as Steiger and Lind's (1980) Root Mean Square Error of Approximation (RMSEA) or the family of baseline fit indices (e.g., RNI, CFI), and for the computation of statistical power for model hypothesis testing. Despite this wide use, surprisingly little is known about the extent to which the test statistic follows a noncentral chi-square in applied research. Our study examines several hypotheses about the suitability of the noncentral chi-square distribution for the usual SEM test statistic under conditions commonly encountered in practice. We designed Monte Carlo computer simulation experiments to empirically test these research hypotheses. Our experimental conditions included seven sample sizes ranging from 50 to 1000, and three distinct model types, each with five specifications ranging from a correct model to the severely misspecified uncorrelated baseline model. In general, we found that for models with small to moderate misspecification, the noncentral chi-square distribution is well approximated when the sample size is large (e.g., greater than 200), but there was evidence of bias in both mean and variance in smaller samples. A key finding was that the test statistics for the uncorrelated variable baseline model did not follow the noncentral chi-square distribution for any model type across any sample size. We discuss the implications of our findings for the SEM fit indices and power estimation procedures that are based on the noncentral chi-square distribution as well as potential directions for future research.
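A noncentral chi-square variate is simply a sum of squared normals with shifted means, so the Monte Carlo setup such experiments rely on can be sketched directly; the df and noncentrality values below are arbitrary.

```python
import random

def ncx2_sample(df, lam, rng):
    # Noncentral chi-square draw: sum of df squared standard normals,
    # one shifted so that the noncentrality parameter is lam = mu**2.
    total = rng.gauss(lam ** 0.5, 1.0) ** 2
    for _ in range(df - 1):
        total += rng.gauss(0.0, 1.0) ** 2
    return total

rng = random.Random(2)
draws = [ncx2_sample(df=10, lam=5.0, rng=rng) for _ in range(20000)]
mean = sum(draws) / len(draws)                          # theory: df + lam = 15
var = sum((x - mean) ** 2 for x in draws) / len(draws)  # theory: 2*df + 4*lam = 40
```

Comparing the empirical distribution of SEM test statistics from simulated data against this reference distribution (e.g. via its known mean df + λ and variance 2df + 4λ) is the basic logic of the experiments described above.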
da Silva, Roberto; Alves, Nelson; Drugowich de Felício, Jose Roberto
2013-01-01
In this work, we study the critical behavior of second-order points, specifically the Lifshitz point (LP) of a three-dimensional Ising model with axial competing interactions [the axial-next-nearest-neighbor Ising (ANNNI) model], using time-dependent Monte Carlo simulations. We use a recently developed technique that helps us localize the critical temperature corresponding to the best power law for magnetization decay over time:
Wan Chan Tseung, H; Ma, J; Beltran, C
2015-06-01
Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with GEANT4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the GEANT4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is ∼ 20 s for 1 × 10^7 proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with GEANT4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically
Patrone, Paul N; Einstein, T L; Margetis, Dionisios
2010-12-01
We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
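An Euler-Maruyama sketch of such coupled SDEs, with a conservative white noise in which each kick widens one terrace and narrows its neighbour so the total width is conserved; the linearized nearest-neighbour interaction and all coefficients are illustrative stand-ins, not the BCF-derived ones.

```python
import math, random

def simulate_terraces(N, k, D, dt, steps, rng):
    # Euler-Maruyama integration of linearized coupled SDEs for terrace
    # widths: deterministic relaxation toward neighbouring widths plus a
    # conservative white noise (each kick widens one terrace and narrows
    # the next, so the total width is exactly conserved).
    w = [1.0] * N
    for _ in range(steps):
        dw = [k * (w[(i - 1) % N] - 2.0 * w[i] + w[(i + 1) % N]) * dt
              for i in range(N)]
        for i in range(N):
            xi = math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
            dw[i] += xi
            dw[(i + 1) % N] -= xi   # conservative noise pair
        w = [wi + d for wi, d in zip(w, dw)]
    return w

rng = random.Random(9)
w = simulate_terraces(N=50, k=1.0, D=0.05, dt=0.01, steps=2000, rng=rng)
total_width = sum(w)   # conserved at N * 1.0 up to round-off
```

A histogram of the final widths approximates the steady-state terrace-width distribution (TWD); the mean-field analysis described above replaces the two neighbouring widths in the drift term by a self-consistent average field.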
NASA Astrophysics Data System (ADS)
Petit, Odile; Jouanne, Cédric; Litaize, Olivier; Serot, Olivier; Chebboubi, Abdelhazize; Pénéliau, Yannick
2017-09-01
The TRIPOLI-4® Monte Carlo transport code and the FIFRELIN fission model have been coupled by means of external files so that neutron transport can take into account fission distributions (multiplicities and spectra) that are not averaged, as they are when using evaluated nuclear data libraries. Spectral effects on responses in shielding configurations with fission sampling are then expected. In the present paper, the principle of this coupling is detailed and a comparison of TRIPOLI-4® fission distributions at the emission of fission neutrons is presented when using JEFF-3.1.1 evaluated data or FIFRELIN data generated either through an n/γ-uncoupled mode or through an n/γ-coupled mode. Finally, an application to a modified version of the ASPIS benchmark is performed and the impact of using FIFRELIN data on neutron transport is analyzed. Differences noticed in average reaction rates on the surfaces closest to the fission source are mainly due to the average prompt fission spectrum. Moreover, when working with the same average spectrum, a complementary analysis based on non-averaged reaction rates still shows significant differences that point out the real impact of using a fission model in neutron transport simulations.
Geometrical Monte Carlo simulation of atmospheric turbulence
NASA Astrophysics Data System (ADS)
Yuksel, Demet; Yuksel, Heba
2013-09-01
Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander through propagation across turbulent eddies of varying sizes and refractive indices, which can severely impair the operation of target designation and Free-Space Optical (FSO) communication systems. In addition, experimenting on an FSO communication system is tedious and difficult: interference from many elements affects the results and inflates the error variance of the experimental outcomes. The simulation and analysis of turbulence-induced beams require particularly delicate attention in the stronger turbulence regimes. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the propagation path is modeled as a spatial distribution of spherical bubbles whose refractive index discontinuities are drawn from a Gaussian distribution whose mean is the index of air. For each statistical representation of the atmosphere, the paths of rays are analyzed using geometrical optics. These Monte Carlo techniques assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, dark and bright spots at the receiver give an idea of the intensity pattern without having to solve the wave equation. The Monte Carlo analysis is compared with the predictions of wave theory.
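A stripped-down sketch of the phase-accumulation idea: a straight ray crosses a chain of equal-sized bubbles whose refractive indices are Gaussian draws around the index of air, and the contributions are summed modulo 2π. The bubble size, index variance, and wavelength are invented placeholders; the actual model uses a spatial distribution of bubble sizes and refracted (not straight) ray paths.

```python
import math, random

def ray_phase(path_m, bubble_m, wavelength_m, n_air=1.0, dn_sigma=1e-6,
              rng=random):
    # Accumulated optical phase of a straight ray crossing a chain of
    # spherical "turbulence bubbles"; each bubble's refractive index is a
    # Gaussian draw centred on the index of air.
    phase = 0.0
    for _ in range(int(path_m / bubble_m)):
        n = rng.gauss(n_air, dn_sigma)
        phase += 2.0 * math.pi * n * bubble_m / wavelength_m
    return phase % (2.0 * math.pi)

rng = random.Random(4)
# One Monte Carlo realization per ray; many rays sample the phase spread.
phases = [ray_phase(100.0, 0.1, 1.55e-6, rng=rng) for _ in range(200)]
```

Summing the complex amplitudes exp(i·phase) of rays arriving at the same receiver point would then yield the bright and dark spots mentioned above.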
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.
2015-11-01
Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol-1, respectively.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
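The kind of stress-strength Monte Carlo the article describes can be sketched in a few lines; the normal distributions below are illustrative stand-ins for the boom member's load and capacity, chosen so the exact interference result is Φ(-2) ≈ 0.0228 for comparison.

```python
import random

def boom_failure_probability(n, rng):
    # Stress-strength interference: the boom member fails when the random
    # applied stress exceeds the random yield strength. For these normals
    # the exact answer is Phi(-(400-300)/sqrt(30**2+40**2)) = Phi(-2).
    fails = 0
    for _ in range(n):
        stress = rng.gauss(300.0, 30.0)     # MPa, illustrative load
        strength = rng.gauss(400.0, 40.0)   # MPa, illustrative capacity
        if stress > strength:
            fails += 1
    return fails / n

rng = random.Random(8)
pf = boom_failure_probability(n=100000, rng=rng)
```

That the estimate converges on the closed-form interference result is precisely the "compares favorably to the theoretical calculations" point made above.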
Shimada, M; Yamada, Y; Itoh, M; Yatagai, T
2001-09-01
Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.
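The regression step can be sketched as an ordinary least-squares fit of a modified Beer-Lambert model, A(λ) ≈ ε_mel(λ)·c_mel + ε_blood(λ)·c_blood, with path-length factors folded into the coefficients; the extinction values below are invented for illustration, not measured skin data.

```python
def solve_2x2_least_squares(E, A):
    # Normal equations (E^T E) c = E^T A for a two-chromophore fit.
    s00 = sum(r[0] * r[0] for r in E)
    s01 = sum(r[0] * r[1] for r in E)
    s11 = sum(r[1] * r[1] for r in E)
    b0 = sum(r[0] * a for r, a in zip(E, A))
    b1 = sum(r[1] * a for r, a in zip(E, A))
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det)

E = [[1.5, 0.2],   # wavelength 1: melanin-dominated coefficients
     [0.8, 0.9],   # wavelength 2: mixed
     [0.3, 1.4]]   # wavelength 3: blood-dominated
c_true = (0.4, 0.7)                                    # hypothetical concentrations
A = [r[0] * c_true[0] + r[1] * c_true[1] for r in E]   # noiseless absorbances
c_est = solve_2x2_least_squares(E, A)
```

Because the fit is a small linear solve rather than a full photon-transport simulation, it is orders of magnitude cheaper than the Monte Carlo analysis used above to validate it.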
NASA Astrophysics Data System (ADS)
Fougere, Nicolas; Altwegg, K.; Berthelier, J.-J.; Bieler, A.; Bockelée-Morvan, D.; Calmonte, U.; Capaccioni, F.; Combi, M. R.; De Keyser, J.; Debout, V.; Erard, S.; Fiethe, B.; Filacchione, G.; Fink, U.; Fuselier, S. A.; Gombosi, T. I.; Hansen, K. C.; Hässig, M.; Huang, Z.; Le Roy, L.; Leyrat, C.; Migliorini, A.; Piccioni, G.; Rinaldi, G.; Rubin, M.; Shou, Y.; Tenishev, V.; Toth, G.; Tzou, C.-Y.
2016-11-01
We analyse Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) Double Focusing Mass Spectrometer data between 2014 August and 2016 February to examine the effect of seasonal variations on the four major species within the coma of 67P/Churyumov-Gerasimenko (H2O, CO2, CO, and O2), resulting from the tilt in the orientation of the comet's spin axis. Using a numerical data inversion, we derive the non-uniform activity distribution at the surface of the nucleus for these species, suggesting that the surface activity distribution has not changed significantly and that the differences observed in the coma are due solely to variations in illumination conditions. A three-dimensional Direct Simulation Monte Carlo model is applied in which the boundary conditions are computed by coupling the surface activity distributions with the local illumination. The model is able to reproduce the evolution of the densities observed by ROSINA, including the changes happening at equinox. While O2 stays correlated with H2O as it was before equinox, CO2 and CO, which had a poor correlation with H2O pre-equinox, also became well correlated with H2O post-equinox. Integrating the model densities along the line of sight yields column densities directly comparable to the VIRTIS-H observations. The evolution of the volatiles' production rates is also derived from the coma model, showing a steepening of the production rate curves after equinox. The model/data comparison suggests that the seasonal effects leave the Northern hemisphere of 67P's nucleus more processed, with a layered structure, while the Southern hemisphere constantly exposes new material.
NASA Astrophysics Data System (ADS)
Xie, Huamu; Ben-Zvi, Ilan; Rao, Triveni; Xin, Tianmu; Wang, Erdong
2016-10-01
High-average-current, high-brightness electron sources have important applications, such as in high-repetition-rate free-electron lasers, or in the electron cooling of hadrons. Bialkali photocathodes are promising high-quantum-efficiency (QE) cathode materials, while superconducting rf (SRF) electron guns offer continuous-mode operation at high acceleration, as is needed for high-brightness electron sources. Thus, we must have a comprehensive understanding of the performance of bialkali photocathode at cryogenic temperatures when they are to be used in SRF guns. To remove the heat produced by the radio-frequency field in these guns, the cathode should be cooled to cryogenic temperatures. We recorded an 80% reduction of the QE upon cooling the K2CsSb cathode from room temperature down to the temperature of liquid nitrogen in Brookhaven National Laboratory (BNL)'s 704 MHz SRF gun. We conducted several experiments to identify the underlying mechanism in this reduction. The change in the spectral response of the bialkali photocathode, when cooled from room temperature (300 K) to 166 K, suggests that a change in the ionization energy (defined as the energy gap from the top of the valence band to vacuum level) is the main reason for this reduction. We developed an analytical model of the process, based on Spicer's three-step model. The change in ionization energy, with falling temperature, gives a simplified description of the QE's temperature dependence. We also developed a 2D Monte Carlo code to simulate photoemission that accounts for the wavelength-dependent photon absorption in the first step, the scattering and diffusion in the second step, and the momentum conservation in the emission step. From this simulation, we established a correlation between ionization energy and reduction in the QE. The simulation yielded results comparable to those from the analytical model. The simulation offers us additional capabilities such as calculation of the intrinsic emittance
Monte Carlo simulation of intercalated carbon nanotubes.
Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter
2007-01-01
Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT, the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the higher ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.
Monte Carlo simulation of coarsening in a model of submonolayer epitaxial growth
NASA Astrophysics Data System (ADS)
Lam, Pui-Man; Bagayoko, Diola; Hu, Xiao-Yang
1999-06-01
We investigate the effect of coarsening in the Clarke-Vvedensky model of thin film growth, primarily as a model of statistical physics far from equilibrium. We deposit adatoms on the substrate until a fixed coverage is reached. We then stop the deposition and measure the subsequent change in the distribution of island sizes. We find that for large flux, coarsening in this model is consistent with the Lifshitz-Slyozov law ξ ~ t^{1/3}, where ξ is the characteristic linear dimension and t is the time in the coarsening process. We have also calculated the stationary states of the island size distributions at long times and find that these distribution functions are independent of initial conditions. They obey scaling, with the universal scaling function agreeing with that obtained by Kandel using the Smoluchowski equation in a cluster coalescence model.
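The quoted Lifshitz-Slyozov scaling can be checked numerically by fitting the coarsening exponent on a log-log scale. A sketch with synthetic data (not the paper's own analysis; `numpy` assumed):

```python
import numpy as np

def coarsening_exponent(t, xi):
    """Least-squares slope of log(xi) vs log(t), i.e. the exponent n in
    xi ~ t**n; Lifshitz-Slyozov coarsening predicts n = 1/3."""
    slope, _intercept = np.polyfit(np.log(np.asarray(t, float)),
                                   np.log(np.asarray(xi, float)), 1)
    return slope

# Synthetic coarsening data obeying xi = 2 * t**(1/3)
t = [10.0, 100.0, 1000.0, 10000.0]
xi = [2.0 * x ** (1.0 / 3.0) for x in t]
n = coarsening_exponent(t, xi)   # close to 1/3
```

In practice one would fit only the late-time portion of the simulated data, where the scaling regime has been reached.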
Monte Carlo Simulation of Endlinking Oligomers
NASA Technical Reports Server (NTRS)
Hinkley, Jeffrey A.; Young, Jennifer A.
1998-01-01
This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.
NASA Astrophysics Data System (ADS)
Lyubartsev, Alexander; Ben-Naim, Arieh
2009-11-01
We have carried out Monte Carlo simulation on the primitive one dimensional model for water described earlier [A. Ben-Naim, J. Chem. Phys. 128, 024506 (2008)]. We show that by taking into account second nearest neighbor interactions, one can obtain the characteristic anomalous solvation thermodynamic quantities of inert solutes in water. This model clearly demonstrates the molecular origin of the large negative entropy of solvation of an inert solute in water.
NASA Astrophysics Data System (ADS)
Milchev, Andrey; Binder, Kurt; Bhattacharya, Aniket
2004-09-01
Dynamic Monte Carlo simulations of a bead-spring model of flexible macromolecules threading through a very narrow pore in a very thin rigid membrane are presented, assuming a purely repulsive monomer-wall interaction on the cis side of the membrane, while the trans side is attractive. Two choices of monomer-wall attraction ɛ are considered, one slightly below and the other slightly above the "mushroom to pancake" adsorption threshold ɛc for an infinitely long chain. Studying chain lengths N = 32, 64, 128, and 256 and varying, over a wide range, the number of monomers Ntrans(t=0) that have already passed the pore when the simulation starts, we find for ɛ < ɛc (the nonadsorbing case) that the translocation probability varies in proportion to ctrans = Ntrans(t=0)/N for small ctrans, while for ɛ > ɛc a finite number Ntrans(t=0) suffices for the translocation probability to be close to unity. In the case ɛ < ɛc, however, the time it takes for those chains to get through the pore and complete the translocation process scales as τ ∝ N^{2.23±0.04}. This result agrees with the suggestion of Chuang, Kantor, and Kardar [Phys. Rev. E 65, 011802 (2001)] that the translocation time is proportional to the Rouse time, which scales under good solvent conditions as τ_Rouse ∝ N^{2ν+1}, with the excluded-volume exponent ν ≈ 0.59 in d = 3 dimensions. Our results hence disagree with suggestions that the translocation time should scale as either N^2 or N^3. For ɛ > ɛc, we find that the translocation time scales as τ ∝ N^{1.65±0.08}. We suggest a tentative scaling explanation for this result. The distribution of translocation times is also obtained and discussed.
Runov, A.M.; Kasilov, S.V.; Helander, P.
2015-11-01
A kinetic Monte Carlo model suited for self-consistent transport studies is proposed and tested. The Monte Carlo collision operator is based on a widely used model of Coulomb scattering by a drifting Maxwellian and a new algorithm enforcing the momentum and energy conservation laws. The difference from other approaches lies in a specific procedure for calculating the background Maxwellian parameters, which does not require ensemble averaging and therefore allows the use of single-particle algorithms. This possibility is useful in transport balance (steady-state) problems with a phenomenological diffusive ansatz for the turbulent transport, because it allows the direct use of variance reduction methods well suited to single-particle algorithms. In addition, a method for the self-consistent calculation of the electric field is discussed. Results of testing the new collision operator on a set of 1D examples, and preliminary results of 2D modelling in realistic tokamak geometry, are presented.
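The conservation-enforcing step of such a collision operator can be illustrated with a short sketch. This is an assumed, simplified version (not the paper's algorithm): equal-weight particles are isotropically re-oriented, then an exact shift-and-rescale restores the ensemble's total momentum and kinetic energy; `numpy` assumed:

```python
import numpy as np

def scatter_conserving(v, rng):
    """Isotropic re-orientation of each particle velocity (pitch-angle
    scatter preserving each speed), followed by an exact shift-and-rescale
    that restores the ensemble's total momentum and kinetic energy."""
    v = np.asarray(v, float)
    n = len(v)
    u0 = v.mean(axis=0)                  # pre-collision mean velocity
    e0 = (v ** 2).sum()                  # pre-collision 2*KE/m
    speeds = np.linalg.norm(v, axis=1)
    dirs = rng.normal(size=v.shape)      # random directions on the sphere
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    w = speeds[:, None] * dirs
    wc = w - w.mean(axis=0)              # centred scattered velocities
    # choose alpha so that sum |u0 + alpha*wc|^2 equals e0 exactly
    alpha = np.sqrt((e0 - n * (u0 ** 2).sum()) / (wc ** 2).sum())
    return u0 + alpha * wc
```

The shift re-imposes the mean (momentum) exactly, and the scale factor alpha fixes the energy, since sum |v - u0|^2 = e0 - n|u0|^2 is non-negative by construction.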
Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses
ALAM,TODD M.
1999-12-21
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. Utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to connectivities observed experimentally in solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
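A minimal discrete bond model of this kind can be sketched as follows. This is a hypothetical two-species ring with a single relative bond energy dE (in units of kT), not the paper's model: dE = 0 gives random mixing, dE > 0 favours alternating bonds, and dE < 0 favours clustering:

```python
import math
import random

def simulate_connectivity(n=200, frac_a=0.5, dE=0.0, sweeps=1000, seed=7):
    """Metropolis simulation of a ring of two phosphate species (a discrete
    bond model).  Each like-like neighbour bond costs +dE (in units of kT)
    relative to an unlike bond.  Returns the equilibrium fraction of
    unlike nearest-neighbour bonds."""
    rng = random.Random(seed)
    na = int(n * frac_a)
    s = [1] * na + [0] * (n - na)
    rng.shuffle(s)

    def site_energy(i):
        left, right = s[(i - 1) % n], s[(i + 1) % n]
        return (dE if s[i] == left else 0.0) + (dE if s[i] == right else 0.0)

    for _ in range(sweeps * n):
        i, j = rng.randrange(n), rng.randrange(n)
        if s[i] == s[j]:
            continue                                # swap would do nothing
        before = site_energy(i) + site_energy(j)
        s[i], s[j] = s[j], s[i]
        after = site_energy(i) + site_energy(j)
        if after > before and rng.random() >= math.exp(before - after):
            s[i], s[j] = s[j], s[i]                 # reject the move
    return sum(s[i] != s[(i + 1) % n] for i in range(n)) / n
```

With equal species fractions, the unlike-bond fraction is near 0.5 for dE = 0, approaches 1 for strongly alternating bonding, and collapses toward 0 for strongly clustering bonding.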
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Refined analysis and modeling of nuclear reactors can overload the memory of a single processor core. One method to solve this problem is domain decomposition. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. The influence of the domain decomposition algorithms on tally variances is also discussed. (authors)
NASA Astrophysics Data System (ADS)
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
Introduction The first few days elapsed after the occurrence of a strong earthquake and in the presence of an ongoing aftershock sequence are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
NASA Astrophysics Data System (ADS)
Aziz Hashikin, Nurul Ab; Yeong, Chai-Hong; Guatelli, Susanna; Jeet Abdullah, Basri Johan; Ng, Kwan-Hoong; Malaroda, Alessandra; Rosenfeld, Anatoly; Perkins, Alan Christopher
2017-09-01
We aimed to investigate the validity of the partition model (PM) in estimating the absorbed doses to liver tumour (D_T), normal liver tissue (D_NL) and lungs (D_L), when cross-fire irradiations between these compartments are being considered. A MIRD-5 phantom incorporating various treatment parameters, i.e. tumour involvement (TI), tumour-to-normal liver uptake ratio (T/N) and lung shunting (LS), was simulated using the Geant4 Monte Carlo (MC) toolkit. 10^8 track histories were generated for each combination of the three parameters to obtain the absorbed dose per activity uptake in each compartment (D_T/A_T, D_NL/A_NL, and D_L/A_L). The administered activities, A, were estimated using PM so as to achieve either the limiting dose to normal liver, D_NL^lim, or to lungs, D_L^lim (70 or 30 Gy, respectively). Using these administered activities, the activity uptake in each compartment (A_T, A_NL, and A_L) was estimated and multiplied by the absorbed dose per activity uptake obtained from the MC simulations, to give the actual dose received by each compartment. PM overestimated D_L by 11.7% in all cases, due to particles escaping from the lungs. D_T and D_NL by MC were largely affected by T/N, which is not considered by PM because cross-fire at the tumour-normal liver boundary is excluded. This resulted in PM overestimating D_T by up to 8% and underestimating D_NL by as much as -78%. When D_NL^lim was estimated via PM, the MC simulations showed significantly higher D_NL for cases with higher T/N and LS ⩽ 10%. All D_L and D_T by MC were overestimated by PM, thus D_L^lim was never exceeded. PM leads to inaccurate dose estimations due to the exclusion of cross-fire irradiation, i.e. between the tumour and normal liver tissue. Caution should be taken for cases with higher TI and T/N, and lower LS, as these contribute to major underestimation of D_NL. For D_L, a different correction
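The partition-model dose bookkeeping examined above can be sketched as follows. This is a minimal illustration assuming the standard 90Y mean-dose factor of 49.67 Gy·kg/GBq and no cross-fire between compartments, which is exactly the limitation the study probes; all masses and uptakes in the example are hypothetical:

```python
def partition_model(A_GBq, lsf, t_n, m_tumour, m_liver, m_lung=1.0):
    """MIRD partition model for 90Y microspheres (no cross-fire between
    compartments).  A_GBq: administered activity (GBq); lsf: lung shunt
    fraction; t_n: tumour-to-normal-liver uptake ratio per unit mass;
    masses in kg.  Returns absorbed doses (Gy) to tumour, normal liver
    and lungs, assuming full local absorption of the beta energy."""
    K = 49.67                                   # Gy.kg/GBq for 90Y
    a_lung = A_GBq * lsf                        # shunted activity
    a_liver_total = A_GBq * (1.0 - lsf)
    # remaining activity splits by the mass-weighted uptake ratio
    a_tumour = a_liver_total * t_n * m_tumour / (t_n * m_tumour + m_liver)
    a_normal = a_liver_total - a_tumour
    return (K * a_tumour / m_tumour,
            K * a_normal / m_liver,
            K * a_lung / m_lung)
```

With T/N = 1 and equal tumour and normal-liver masses, the tumour and normal-liver doses coincide, which is a convenient sanity check on the activity split.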
Monte Carlo simulation of chromatin stretching
NASA Astrophysics Data System (ADS)
Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg
2006-04-01
We present Monte Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: chromatin is much more resistant to stretching than to bending.
Mode, Charles J; Gallop, Robert J
2008-02-01
A case has been made for the use of Monte Carlo simulation methods when the incorporation of mutation and natural selection into Wright-Fisher gametic sampling models renders them intractable from the standpoint of classical mathematical analysis. The paper is organized around five themes. Among these was scientific openness and clear documentation of the mathematics underlying the software, so that the results of any Monte Carlo simulation experiment may be duplicated by any interested investigator in a programming language of his choice. A second theme was the disclosure of the random number generator used in the experiments, to provide critical insight as to whether the generated uniform random variables satisfactorily met the criterion of independence. A third theme was a review of recent literature in genetics on attempts to find signatures of evolutionary processes, such as natural selection, among the millions of segments of DNA in the human genome, which may help guide the search for new drugs to treat diseases. A fourth theme involved formalization of Wright-Fisher processes in a simple form that expedited the writing of software to run Monte Carlo simulation experiments. Also included in this theme was the reporting of several illustrative Monte Carlo simulation experiments for the cases of two and three alleles at some autosomal locus, in which attempts were made to apply the theory of Wright-Fisher models to gain some understanding of how evolutionary signatures may have developed in the human genome and those of other diploid species. A fifth theme centered on recommendations that more demographic factors, such as non-constant population size, be included in future attempts to develop computer models dealing with signatures of evolutionary processes in genomes of various species. A brief review of literature on the incorporation of demographic factors into genetic evolutionary models was also included to expedite and
Monte Carlo simulations of Protein Adsorption
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges
2008-03-01
Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix-to-sheet or helix-to-random-coil secondary structural rearrangement. To better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of the adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.
Lattice Monte Carlo simulations of polymer melts
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping
2014-12-01
We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction of 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease in the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and is described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations from Gaussian statistics in the chain structure factor Sc(q) [minimum in the Kratky plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain lengths these deviations are no longer visible when chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.
Monte Carlo simulations of electron photoemission from cesium antimonide
NASA Astrophysics Data System (ADS)
Gupta, Pranav; Cultrera, Luca; Bazarov, Ivan
2017-06-01
We report on the results from semi-classical Monte Carlo simulations of electron photoemission (photoelectric emission) from cesium antimonide (Cs3Sb) and compare them with experimental results at 90 K and room temperature, with an emphasis on near-threshold photoemission properties. Interfacial effects, impurities, and electron-phonon coupling are central features of our Monte Carlo model. We use these simulations to predict photoemission properties at the ultracold cryogenic temperature of 20 K and to identify critical material parameters that need to be properly measured experimentally for reproducing the electron photoemission properties of Cs3Sb and other materials more accurately.
Borges, C.; Zarza-Moreno, M.; Heath, E.; Teixeira, N.; Vaz, P.
2012-01-15
Purpose: The most recent Varian micro multileaf collimator (MLC), the High Definition (HD120) MLC, was modeled using the BEAMNRC Monte Carlo code. This model was incorporated into a Varian medical linear accelerator, for a 6 MV beam, in static and dynamic mode. The model was validated by comparing simulated profiles with measurements. Methods: The Varian Trilogy (2300C/D) accelerator model was accurately implemented using the state-of-the-art Monte Carlo simulation program BEAMNRC and validated against off-axis and depth dose profiles measured using ionization chambers, by adjusting the energy and the full width at half maximum (FWHM) of the initial electron beam. The HD120 MLC was modeled by developing a new BEAMNRC component module (CM), designated HDMLC, adapting the available DYNVMLC CM and incorporating the specific characteristics of this new micro MLC. The leaf dimensions were provided by the manufacturer. The geometry was visualized by tracing particles through the CM and recording their position when a leaf boundary is crossed. The leaf material density and the abutting air gap between leaves were adjusted in order to obtain good agreement between the simulated leakage profiles and EBT2 film measurements performed in a solid water phantom. To validate the HDMLC implementation, additional MLC static patterns were also simulated and compared to additional measurements. Furthermore, the ability to simulate dynamic MLC fields was implemented in the HDMLC CM. The simulation results for these fields were compared with EBT2 film measurements performed in a solid water phantom. Results: Overall, for open fields with and without the MLC, the discrepancies between simulations and ionization chamber measurements in a water phantom are below 2% for off-axis profiles, and for depth-dose profiles below 2% beyond the depth of maximum dose and below 4% in the build-up region. Under the conditions of these simulations, this tungsten-based MLC has a density of 18.7 g
NASA Astrophysics Data System (ADS)
Terzyk, Artur P.; Furmaniak, Sylwester; Gauden, Piotr A.; Harris, Peter J. F.; Włoch, Jerzy
2008-09-01
Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods for describing adsorption data on microporous carbons widely used in adsorption science. Two carbon structures are studied, one with a small distribution of micropores in the range up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that using simulated data some empirical relationships obtained from experiment can be successfully recovered. Next we test the applicability of Dubinin's related models including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results obtained demonstrate the limits and applications of the models studied in the field of carbon porosity characterization.
NASA Astrophysics Data System (ADS)
Cassidy, Jeffrey; Betz, Vaughn; Lilge, Lothar
2015-02-01
Monte Carlo (MC) simulation is recognized as the “gold standard” for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations particularly are attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have other implementation features including inherent parallelism, and permit a continuously-variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT to generate dose-volume histograms on a tetrahedral mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods and is therefore applicable to a very broad cross-section of anatomy and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right which greatly reduces the number of packets required and hence runtime required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.
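The dose-volume histogram evaluation described above reduces, per region, to a cumulative volume fraction versus dose threshold. A minimal sketch over per-tetrahedron doses and element volumes (hypothetical inputs, not the authors' system; `numpy` assumed):

```python
import numpy as np

def dose_volume_histogram(dose, volume, bins=100):
    """Cumulative DVH from per-tetrahedron absorbed dose and element
    volume: the fraction of total volume receiving at least each dose
    level, evaluated at `bins` equally spaced dose levels."""
    dose = np.asarray(dose, float)
    volume = np.asarray(volume, float)
    levels = np.linspace(0.0, dose.max(), bins)
    frac = np.array([volume[dose >= d].sum() / volume.sum()
                     for d in levels])
    return levels, frac
```

Because the DVH aggregates dose over many mesh elements, its statistical noise is much lower than that of any single-element fluence estimate, which is the variance-reduction effect the abstract notes.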
Monte Carlo simulation of Alaska wolf survival
NASA Astrophysics Data System (ADS)
Feingold, S. J.
1996-02-01
Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
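The Penna aging model referenced here encodes each genome as a bit string in which a set bit at locus k is a deleterious mutation expressed from age k onward; an individual dies once the expressed mutations reach a threshold. A minimal sketch of that survival rule (the genome and threshold are illustrative; the study layers hunting and social disruption on top of this core model):

```python
def survives_at_age(genome, age, mutation_limit):
    """True while the set bits at loci 0..age stay below mutation_limit."""
    expressed = bin(genome & ((1 << (age + 1)) - 1)).count("1")
    return expressed < mutation_limit

genome = 0b00010010  # deleterious mutations expressed at ages 1 and 4
print([survives_at_age(genome, a, 2) for a in range(6)])
# [True, True, True, True, False, False]: death at age 4, when the
# second mutation becomes expressed
```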
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
[Tables 1 and 2: relative error ε (%) of the simulated plume spectral emission, computed with respect to the mean, for calculation series with increasing photon counts N_ph; in Table 1, ε decreases from 11.0% at N_ph = 10^5 to 0.6% at N_ph = 10^7.]
Monte Carlo simulation of Touschek effect.
Xiao, A.; Borland, M.; Accelerator Systems Division
2010-07-30
We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Bernal, M A; Bordage, M C; Brown, J M C; Davídková, M; Delage, E; El Bitar, Z; Enger, S A; Francis, Z; Guatelli, S; Ivanchenko, V N; Karamitros, M; Kyriakou, I; Maigne, L; Meylan, S; Murakami, K; Okada, S; Payno, H; Perrot, Y; Petrovic, I; Pham, Q T; Ristic-Fira, A; Sasaki, T; Štěpán, V; Tran, H N; Villagrasa, C; Incerti, S
2015-12-01
Understanding the fundamental mechanisms involved in the induction of biological damage by ionizing radiation remains a major challenge of today's radiobiology research. The Monte Carlo simulation of physical, physicochemical and chemical processes involved may provide a powerful tool for the simulation of early damage induction. The Geant4-DNA extension of the general purpose Monte Carlo Geant4 simulation toolkit aims to provide the scientific community with an open source access platform for the mechanistic simulation of such early damage. This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta). In particular, the review includes the description of new physical models for the description of electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of physicochemical and chemical stages of water radiolysis. Several implementations of geometrical models of biological targets are presented as well, and the list of Geant4-DNA examples is described.
Numerical reproducibility for implicit Monte Carlo simulations
Cleveland, M.; Brunner, T.; Gentile, N.
2013-07-01
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility, at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended-reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches required varying degrees of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step. (authors)
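The order-dependence problem and the integer-tally remedy can be seen in a few lines; this is a generic illustration of the idea, not the code from the paper (the number of fixed-point digits is an arbitrary choice):

```python
# Double-precision addition is not associative, so parallel reductions that
# reorder a sum can change its value:
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

def integer_tally(scores, digits=9):
    """Round each score to fixed point and sum exactly as integers, making
    the total independent of summation order (at the cost of accuracy)."""
    scale = 10 ** digits
    return sum(int(round(s * scale)) for s in scores) / scale

scores = [0.1, 0.2, 0.3, 1e-7, 1e7]
print(integer_tally(scores) == integer_tally(list(reversed(scores))))  # True
```

Python integers are arbitrary precision, so the inner sum is exact and the result is independent of summation order, which is precisely the reproducibility property the paper exploits.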
Search and Rescue Monte Carlo Simulation.
1985-03-01
confidence interval) of the number of lives saved. A single-page output and computer graphic present the information to the user in an easily understood format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)
NASA Astrophysics Data System (ADS)
Zaim, Ahmed; Kerouad, Mohamed
2010-09-01
A Monte Carlo simulation has been used to study the magnetic properties and the critical behaviors of a single spherical nanoparticle, consisting of a ferromagnetic core of σ = ±1/2 spins surrounded by a ferromagnetic shell of S = ±1, 0 or S = ±1/2, ±3/2 spins with antiferromagnetic interface coupling, located on a simple cubic lattice. A number of characteristic phenomena have been found. In particular, the effects of the shell coupling and the interface coupling on both the critical and compensation temperatures are investigated. We have found that, for appropriate values of the system parameters, two compensation temperatures may occur in the present system.
NASA Astrophysics Data System (ADS)
Binder, Kurt; Müller, M.; Milchev, A.; Landau, D. P.
2005-07-01
Two-phase coexistence in systems with free surfaces is enforced by boundary fields requiring the presence of an interface. Varying the temperature or the surface field, one can observe new types of phase transitions where the interface essentially disappears (it becomes bound to a wall or a wedge or a corner of the system). These transitions are simulated with Monte Carlo methods for Ising ferromagnets and polymer blends, applying finite-size scaling analysis. Anisotropic critical fluctuations may occur, and in the limit where the system becomes macroscopically large in all three directions the order parameter vanishes discontinuously (either because its exponent β=0, or its critical amplitude diverges). Since interfacial fluctuations are slow and large systems are needed (e.g., lattices up to 80×80×442 sites in the double wedge case), significant computer resources are necessary for meaningful accuracy.
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-07
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, a CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using their respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that is a conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined the representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm-3 and 1.1 g cm-3 occupying 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
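The two-step conversion amounts to composing two piecewise-linear (polyline) lookups. A sketch with hypothetical node values (the real polylines must come from the stoichiometric calibration and the standard-tissue fits described in the paper):

```python
import numpy as np

# Step 1: CT number (HU) -> mass density, scanner-specific calibration
# (node values below are hypothetical illustrations).
hu_nodes = np.array([-1000.0, -100.0, 0.0, 100.0, 1600.0])
rho_nodes = np.array([0.001, 0.93, 1.00, 1.07, 1.90])  # g/cm^3

# Step 2: mass density -> stopping-power ratio, scanner-independent polyline
# (also hypothetical).
rho_grid = np.array([0.001, 0.93, 1.00, 1.07, 1.90])
spr_nodes = np.array([0.001, 0.95, 1.00, 1.06, 1.70])

def hu_to_spr(hu):
    rho = np.interp(hu, hu_nodes, rho_nodes)    # step 1
    return np.interp(rho, rho_grid, spr_nodes)  # step 2

print(hu_to_spr(0.0))  # 1.0: water maps to unit stopping-power ratio
```

Keeping the two lookups separate is the point of the method: only the step-1 table depends on the particular CT scanner.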
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scan, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain
Monte Carlo simulation for the transport beamline
NASA Astrophysics Data System (ADS)
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Lin, J. Y. Y.; Aczel, Adam A; Abernathy, Douglas L; Nagler, Stephen E; Buyers, W. J. L.; Granroth, Garrett E
2014-01-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Bhuiyan, Lutful Bari; Lamperski, Stanisław; Wu, Jianzhong; Henderson, Douglas
2012-08-30
Theoretical difficulties in describing the structure and thermodynamics of an ionic liquid double layer are often associated with the nonspherical shapes of ionic particles and extremely strong electrostatic interactions. The recent density functional theory predictions for the electrochemical properties of the double layer formed by a model ionic liquid wherein each cation is represented by two touching hard spheres, one positively charged and the other neutral, and each anion by a negatively charged hard spherical particle, remain untested in this strong coupling regime. We report results from a Monte Carlo simulation of this system. Because for an ionic liquid the Bjerrum length is exceedingly large, it is difficult to perform simulations under conditions of strong electrostatic coupling used in the previous density functional theory study. Results are obtained for a somewhat smaller (but still large) Bjerrum length so that reliable simulation data can be generated for a useful test of the corresponding theoretical predictions. On the whole, the density profiles predicted by the theory are quite good in comparison with the simulation data. The strong oscillations of ionic density profiles and the local electrostatic potential predicted by this theory are confirmed by simulation, although for a small electrode charge and strong electrostatic coupling, the theory predicts the contact ionic densities to be noticeably different from the Monte Carlo results. The theoretical results for the more important electrostatic potential profile at contact are given with good accuracy.
Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.
Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald
2016-12-01
Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
NASA Astrophysics Data System (ADS)
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
NASA Astrophysics Data System (ADS)
Cho, Gyu-Seok; Kim, Kum-Bae; Choi, Sang-Hyoun; Song, Yong-Keun; Lee, Soon-Sung
2017-01-01
Recently, Monte Carlo methods have been used to optimize the design and modeling of radiation detectors. However, most Monte Carlo codes have a fixed and simple optical physics, and the effect of the signal readout devices is not considered because of the limitations of the geometry function. Therefore, the disadvantages of the codes prevent the modeling of the scintillator detector. The modeling of a comprehensive and extensive detector system has been reported to be feasible when the optical physics model of the GEometry ANd Tracking 4 (GEANT4) simulation code is used. In this study, we modeled a Gd2O3:Eu scintillator using the GEANT4 simulation code and compared the results with the measurement data. To obtain the measurement data for the scintillator, we synthesized the Gd2O3:Eu scintillator using the solution combustion method, and we evaluated the characteristics of the scintillator by using X-ray diffraction and photoluminescence. We imported the measured data into the GEANT4 code because GEANT4 cannot simulate a fluorescence phenomenon. The imported data were used as an energy distribution for optical photon generation based on the energy deposited in the scintillator. As a result of the simulation, a strong emission peak consistent with the measured data was observed at 611 nm, and the overall trends of the spectrum agreed with the measured data. This result is significant because the characteristics of the scintillator are equally implemented in the simulation, indicating a valuable improvement in the modeling of scintillator-based radiation detectors.
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Tam, Ka-Ming; Thakur, Bhupender; Yun, Zhifeng; Tomko, Karen; Moreno, Juana; Ramanujam, Jagannathan; Jarrell, Mark
2013-03-01
The Edwards-Anderson model is a typical example of a random frustrated system. It has been a long-standing problem in computational physics due to its long relaxation time. Some important properties of the low-temperature spin glass phase are still poorly understood after decades of study. Recent advances in GPU computing provide a new opportunity to substantially improve the simulations. We developed an MPI-CUDA hybrid code with multi-spin coding for parallel tempering Monte Carlo simulation of the Edwards-Anderson model. Since the system size is relatively small, and a large number of parallel replicas and Monte Carlo moves are required, the problem is well suited to modern GPUs with the CUDA architecture. We use the code to perform an extensive simulation of the three-dimensional Edwards-Anderson model with an external field. This work is funded by the NSF EPSCoR LA-SiGMA project under award number EPS-1003897. This work is partly done on the machines of Ohio Supercomputer Center.
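The replica-exchange (parallel tempering) step at the heart of such a simulation is simple to state: neighboring temperatures propose to swap configurations with the Metropolis probability min(1, exp(Δβ·ΔE)), which preserves the joint Boltzmann distribution. A minimal CPU sketch (illustrative only; the paper's MPI-CUDA multi-spin implementation is far more involved):

```python
import math, random

def swap_sweep(energies, betas, rng):
    """One sweep of neighbor swaps; returns the permutation mapping each
    temperature slot to the replica now occupying it."""
    order = list(range(len(betas)))
    for i in range(len(betas) - 1):
        d_beta = betas[i] - betas[i + 1]
        d_energy = energies[order[i]] - energies[order[i + 1]]
        # Metropolis acceptance for exchanging the two configurations:
        if rng.random() < min(1.0, math.exp(d_beta * d_energy)):
            order[i], order[i + 1] = order[i + 1], order[i]
    return order

# The colder replica (beta = 1.0) holds the higher-energy configuration,
# so the swap is always accepted:
print(swap_sweep([-1.0, -2.0], [1.0, 0.5], random.Random(1)))  # [1, 0]
```

Swaps let replicas trapped in local minima at low temperature escape via the high-temperature chain, which is why the method helps with the long relaxation times mentioned above.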
Monte-Carlo simulation of Callisto's exosphere
NASA Astrophysics Data System (ADS)
Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.
2015-12-01
We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.
Monte Carlo simulation of laser attenuation characteristics in fog
NASA Astrophysics Data System (ADS)
Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi
2011-06-01
Based on the Mie scattering theory and the gamma size-distribution model, the scattering extinction parameter of a spherical fog drop is calculated. For the transmission attenuation of laser light in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in the MATLAB language. The results of the Monte Carlo method in this paper are compared with the results of the single-scattering method. The results show that the influence of multiple scattering needs to be considered when the visibility is low, and that single-scattering calculations then have larger errors. The phenomenon of multiple scattering can be interpreted better when the Monte Carlo method is used to calculate the attenuation ratio of laser light transmitted through fog.
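A toy version of such a transport model makes the single- versus multiple-scattering contrast concrete: photons take exponentially distributed free paths through a slab of given optical depth and either scatter or are absorbed. This sketch uses isotropic scattering as a crude stand-in for the Mie phase function, and all numbers are illustrative:

```python
import math, random

def transmitted_fraction(n_photons, tau_slab, albedo, rng):
    """Fraction of photons exiting the far side of a slab of optical depth tau_slab."""
    out = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0  # optical depth travelled; direction cosine
        while True:
            tau += mu * -math.log(1.0 - rng.random())  # sample a free path
            if tau >= tau_slab:
                out += 1
                break
            if tau < 0.0 or rng.random() > albedo:
                break  # escaped backwards, or absorbed
            mu = rng.uniform(-1.0, 1.0)  # isotropic scatter (Mie stand-in)
    return out / n_photons

rng = random.Random(42)
# With no scattering (albedo = 0) transmission follows Beer-Lambert exp(-tau):
print(transmitted_fraction(20000, 2.0, 0.0, rng))  # close to exp(-2) = 0.135
```

Raising the albedo lets multiply scattered photons still reach the far side, so the transmitted fraction exceeds the single-scattering (Beer-Lambert) estimate, which is the effect the paper reports at low visibility.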
Chang, Qiang; Herbst, Eric
2014-06-01
We have designed an improved algorithm that enables us to simulate the chemistry of cold dense interstellar clouds with a full gas-grain reaction network. The chemistry is treated by a unified microscopic-macroscopic Monte Carlo approach that includes photon penetration and bulk diffusion. To determine the significance of these two processes, we simulate the chemistry with three different models. In Model 1, we use an exponential treatment to follow how photons penetrate and photodissociate ice species throughout the grain mantle. Moreover, the products of photodissociation are allowed to diffuse via bulk diffusion and react within the ice mantle. Model 2 is similar to Model 1 but with a slower bulk diffusion rate. A reference Model 0, which only allows photodissociation reactions to occur on the top two layers, is also simulated. Photodesorption is assumed to occur from the top two layers in all three models. We found that the abundances of major stable species in grain mantles do not differ much among these three models, and the results of our simulation for the abundances of these species agree well with observations. Likewise, the abundances of gas-phase species in the three models do not vary. However, the abundances of radicals in grain mantles can differ by up to two orders of magnitude depending upon the degree of photon penetration and the bulk diffusion of photodissociation products. We also found that complex molecules can be formed at temperatures as low as 10 K in all three models.
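The "exponential treatment" of photon penetration can be sketched as a per-monolayer attenuation of the surface photodissociation rate; the attenuation coefficient and surface rate below are hypothetical placeholders, not values from the models:

```python
import math

def layer_photorates(surface_rate, n_layers, alpha=0.007):
    """Photodissociation rate in monolayer k, attenuated as exp(-alpha * k)
    with depth into the ice mantle (alpha is a hypothetical coefficient)."""
    return [surface_rate * math.exp(-alpha * k) for k in range(n_layers)]

rates = layer_photorates(1.0e-10, 100)
print(rates[0] / rates[-1])  # top monolayer rate is ~2x that of the 100th
```

Unlike the reference model, which confines photodissociation to the top two layers, this treatment gives every layer a nonzero rate, so radicals are produced (and can then diffuse) throughout the mantle.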
Monte Carlo simulation of the short-time behaviour of the dynamic XY-model
NASA Astrophysics Data System (ADS)
Okano, K.; Schülke, L.; Yamagishi, K.; Zheng, B.
1997-07-01
Dynamic relaxation of the XY-model quenched from a high-temperature state to the critical temperature or below is investigated with Monte Carlo methods. When a non-zero initial magnetization is given, in the short-time regime of the dynamic evolution the critical initial increase of the magnetization is observed. The dynamic exponent θ is directly determined. The results show that the exponent θ varies with respect to the temperature. Furthermore, it is demonstrated that this initial increase of the magnetization is universal, i.e. independent of the microscopic details of the initial configurations and the algorithms.
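A bare-bones Metropolis update for the 2D XY model, the kind of dynamics whose short-time relaxation is studied above (a sketch only: measuring the initial-increase exponent requires large lattices, a prepared small initial magnetization, and averaging over many independent runs):

```python
import math, random

def site_energy(spins, L, i, j):
    """XY energy of site (i, j) with its four periodic neighbors."""
    s = spins[i][j]
    return -sum(math.cos(s - spins[x][y])
                for x, y in ((i, (j + 1) % L), (i, (j - 1) % L),
                             ((i + 1) % L, j), ((i - 1) % L, j)))

def metropolis_sweep(spins, L, beta, rng):
    for i in range(L):
        for j in range(L):
            keep = spins[i][j]
            e_old = site_energy(spins, L, i, j)
            spins[i][j] = rng.uniform(0.0, 2.0 * math.pi)
            d_e = site_energy(spins, L, i, j) - e_old
            if rng.random() >= math.exp(-beta * max(d_e, 0.0)):
                spins[i][j] = keep  # reject the move

def magnetization(spins):
    n = sum(len(row) for row in spins)
    mx = sum(math.cos(s) for row in spins for s in row) / n
    my = sum(math.sin(s) for row in spins for s in row) / n
    return math.hypot(mx, my)

L = 8
spins = [[0.0] * L for _ in range(L)]  # fully ordered start, M = 1
print(magnetization(spins))  # 1.0
```

In a short-time study, one would instead start from a nearly random configuration with a small fixed magnetization and record M(t) over the first few hundred sweeps for many runs.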
Maier, Thomas A; Alvarez, Gonzalo; Summers, Michael Stuart; Schulthess, Thomas C
2010-01-01
Using dynamic cluster quantum Monte Carlo simulations, we study the superconducting behavior of a 1/8-doped two-dimensional Hubbard model with imposed unidirectional stripelike charge-density-wave modulation. We find a significant increase of the pairing correlations and critical temperature relative to the homogeneous system when the modulation length scale is sufficiently large. With a separable form of the irreducible particle-particle vertex, we show that optimized superconductivity is obtained for a moderate modulation strength, due to a delicate balance between the modulation-enhanced pairing interaction and a concomitant suppression of the bare particle-particle excitations by a modulation-induced reduction of the quasiparticle weight.
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as is to be expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall
Coherent Scattering Imaging Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5×0.5×0.5 cm³ was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. Further study is needed to examine the effect of breast density and breast thickness
Monte Carlo simulations of medical imaging modalities
Estes, G.P.
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Monte-Carlo Simulation Balancing in Practice
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii
Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.
Zhang, Minhua; Chen, Lihang; Yang, Huaming; Sha, Xijiang; Ma, Jing
2016-07-01
Gibbs ensemble Monte Carlo simulation with configurational bias was employed to study the vapor-liquid equilibrium (VLE) for pure acetic acid and for a mixture of acetic acid and ethylene. An improved united-atom force field for acetic acid based on a Lennard-Jones functional form was proposed. The Lennard-Jones well depth and size parameters for the carboxyl oxygen and hydroxyl oxygen were determined by fitting the interaction energies of acetic acid dimers to the Lennard-Jones potential function. Four different acetic acid dimers and the proportions of them were considered when the force field was optimized. It was found that the new optimized force field provides a reasonable description of the vapor-liquid phase equilibrium for pure acetic acid and for the mixture of acetic acid and ethylene. Accurate values were obtained for the saturated liquid density of the pure compound (average deviation: 0.84 %) and for the critical points. The new optimized force field demonstrated greater accuracy and reliability in calculations of the solubility of the mixture of acetic acid and ethylene as compared with the results obtained with the original TraPPE-UA force field.
Gupta, Ankush; Aldinger, Brandon S; Faggin, Marc F; Hines, Melissa A
2010-07-28
An atomistic, chemically realistic, kinetic Monte Carlo simulator of anisotropic Si(100) etching was developed. Surface silicon atoms were classified on the basis of their local structure, and all atoms of each class were etched with the same rate. A wide variety of morphologies, including rough, striped, and hillocked, was observed. General reactivity trends were correlated with specific morphological features. The production of long rows of unstrained dihydride species, recently observed in NH(4)F (aq) etching of Si(100), could only be explained by the rapid etching of dihydrides that are adjacent to (strained) monohydrides-so-called "alpha-dihydrides." Some etch kinetics promoted the formation of {111}-microfaceted pyramidal hillocks, similar in structure to those observed experimentally during Si(100) etching. Pyramid formation was intrinsic to the etch kinetics. In contrast with previously postulated mechanisms of pyramid formation, no masking agent (e.g., impurity, gas bubble) was required. Pyramid formation was explained in terms of the slow etch rate of the {111} sides, {110} edges, and the dihydride species that terminated the apex of the pyramid. As a result, slow etching of Si(111) surfaces was a necessary, but insufficient, criterion for microfacet formation on Si(100) surfaces.
Xu Donghua; Wirth, Brian D.; Li Meimei; Kirk, Marquis A.
2012-09-03
Understanding materials degradation under intense irradiation is important for the development of next generation nuclear power plants. Here we demonstrate that defect microstructural evolution in molybdenum nanofoils in situ irradiated and observed on a transmission electron microscope can be reproduced with high fidelity using an object kinetic Monte Carlo (OKMC) simulation technique. Main characteristics of defect evolution predicted by OKMC, namely, defect density and size distribution as functions of foil thickness, ion fluence and flux, are in excellent agreement with those obtained from the in situ experiments and from previous continuum-based cluster dynamics modeling. The combination of advanced in situ experiments and high performance computer simulation/modeling is a unique tool to validate physical assumptions/mechanisms regarding materials response to irradiation, and to achieve the predictive power for materials stability and safety in nuclear facilities.
Monte Carlo simulation of proton track structure in biological matter
Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.; ...
2017-05-25
Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V, devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA, where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.
Pod generated by Monte Carlo simulation using a meta-model based on the simSUNDT software
NASA Astrophysics Data System (ADS)
Persson, G.; Hammersberg, P.; Wirdelius, H.
2012-05-01
A recently developed numerical procedure for simulating POD is used to identify the most influential parameters and to test the effect of their interaction and variability under different statistical distributions. Using a multi-parameter prediction model based on the NDT simulation software simSUNDT, a qualified ultrasonic procedure used by personnel within Swedish nuclear power plants is investigated. The stochastic computations are compared with experimentally based POD, and conclusions are drawn for both fatigue and stress corrosion cracks.
NASA Astrophysics Data System (ADS)
WöHling, Thomas; Vrugt, Jasper A.
2011-04-01
In the past two decades significant progress has been made toward the application of inverse modeling to estimate the water retention and hydraulic conductivity functions of the vadose zone at different spatial scales. Many of these contributions have focused on estimating only a few soil hydraulic parameters, without recourse to appropriately capturing and addressing spatial variability. The assumption of a homogeneous medium significantly simplifies the complexity of the resulting inverse problem, allowing the use of classical parameter estimation algorithms. Here we present an inverse modeling study with a high degree of vertical complexity that involves calibration of a 25-parameter Richards'-based HYDRUS-1D model using in situ measurements of volumetric water content and pressure head from multiple depths in a heterogeneous vadose zone in New Zealand. We first determine the trade-off in the fitting of both data types using the AMALGAM multiple-objective evolutionary search algorithm. Then we adopt a Bayesian framework and derive posterior probability density functions of parameter and model predictive uncertainty using the recently developed Differential Evolution Adaptive Metropolis (DREAMZS) adaptive Markov chain Monte Carlo scheme. We use four different formulations of the likelihood function, each differing in its underlying assumption about the statistical properties of the error residual and data used for calibration. We show that AMALGAM and DREAMZS can solve for the 25 hydraulic parameters describing the water retention and hydraulic conductivity functions of the multilayer heterogeneous vadose zone. Our study clearly highlights that multiple data types are simultaneously required in the likelihood function to result in an accurate soil hydraulic characterization of the vadose zone of interest. Remaining error residuals are most likely caused by model deficiencies that are not encapsulated by the multilayer model and cannot be accessed by the
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. The parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed-memory parallel computer or a shared-memory machine with message passing libraries. By adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors, the significant communication time in parallel MC simulation of thin film growth is effectively reduced. The communication overhead does not increase appreciably, and the speedup shows an ascending tendency as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for implementing the Monte Carlo code on other parallel systems.
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
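The key idea above, sampling parameter values rather than sweeping an exhaustive grid, can be sketched in a few lines. The toy run_experiment function and the uniform parameter ranges below are illustrative stand-ins, not anything from the paper:

```python
import random

# Illustrative sketch: estimate the mean outcome of a simulation
# experiment over a d-dimensional parameter space. An exhaustive grid
# with k levels per parameter costs k**d runs, while Monte Carlo
# sampling fixes the budget independently of d.

def run_experiment(params):
    # Placeholder "simulation": any scalar function of the parameters.
    return sum(p * p for p in params)

def grid_cost(k, d):
    # Number of runs an exhaustive k-level grid would require.
    return k ** d

def mc_estimate(d, n_samples, seed=0):
    # Average the experiment outcome over randomly sampled parameters.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        params = [rng.uniform(0.0, 1.0) for _ in range(d)]
        total += run_experiment(params)
    return total / n_samples

print(grid_cost(10, 6))      # exhaustive grid: 10**6 runs
print(mc_estimate(6, 1000))  # sampled: 1000 runs, budget fixed regardless of d
```

For this toy integrand the sampled estimate converges to 2.0 (six uniform variates, each contributing 1/3 in expectation) at a cost that does not grow with dimension.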
A tetrahedron-based inhomogeneous Monte Carlo optical simulator.
Shen, H; Wang, G
2010-02-21
Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program 'MCML' was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulating tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon-triangle interaction recursively and rapidly. In numerical simulation, we have demonstrated the correctness and efficiency of TIM-OS.
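The photon-triangle interaction that TIM-OS evaluates recursively is, at its core, a ray-triangle intersection test. A minimal sketch using the well-known Moller-Trumbore algorithm is given below; the actual TIM-OS internals are not described in the abstract, so this is an illustrative stand-in:

```python
# Moller-Trumbore ray-triangle intersection: the geometric kernel of
# mesh-based photon transport. Returns the distance along the ray to
# the triangle, or None on a miss.

def ray_triangle(orig, d, v0, v1, v2, eps=1e-12):
    """Distance t such that orig + t*d hits triangle (v0, v1, v2), else None."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:            # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)           # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None

# Photon at (0.25, 0.25, -1) travelling along +z hits the unit triangle at t = 1.
print(ray_triangle([0.25, 0.25, -1.0], [0.0, 0.0, 1.0],
                   [0, 0, 0], [1, 0, 0], [0, 1, 0]))  # → 1.0
```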
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
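The general MC recipe described here (propagate assumed input distributions through the rescue calculation) can be illustrated for the probing-depth question. The lognormal burial-depth model and all numbers below are assumptions for illustration only, not the study's inputs:

```python
import random

# Hypothetical sketch: sample burial depths from an assumed
# distribution and compute the fraction of victims a probe line of a
# given probing depth would reach. Sweeping the probing depth then
# shows the detection/effort trade-off.

def detection_rate(probe_depth, n=100_000, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        depth = rng.lognormvariate(0.0, 0.5)  # assumed burial-depth model, metres
        if depth <= probe_depth:
            hits += 1
    return hits / n

for d in (1.0, 1.5, 2.0, 2.5):
    print(d, detection_rate(d))
```

Under this assumed distribution the median burial depth is 1 m, so a 1 m probing depth detects about half the victims and deeper probing shows diminishing returns.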
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.
2014-05-29
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to plasma physics and highly efficient. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
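The multilevel idea (telescoping the estimator over a hierarchy of timestep sizes, with coarse and fine paths driven by the same Brownian increments) can be sketched for a toy Ornstein-Uhlenbeck SDE with Euler-Maruyama stepping; the paper's Milstein scheme with Levy areas is not reproduced here:

```python
import math, random

# Illustrative multilevel Monte Carlo estimator of E[X(1)] for the toy
# SDE dX = -X dt + 0.5 dW, X(0) = 1. Level l uses 2**l Euler-Maruyama
# timesteps; coarse and fine paths share Brownian increments so the
# level corrections have small variance.

def euler_pair(level, T=1.0, x0=1.0, rng=random):
    """Fine path (2**level steps) and coarse path (2**(level-1) steps)
    driven by the same noise; coarse is None at level 0."""
    nf = 2 ** level
    dt = T / nf
    xf, xc, dw_pair = x0, x0, 0.0
    for n in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xf += -xf * dt + 0.5 * dw
        dw_pair += dw
        if level > 0 and n % 2 == 1:       # one coarse step per two fine steps
            xc += -xc * (2 * dt) + 0.5 * dw_pair
            dw_pair = 0.0
    return xf, (xc if level > 0 else None)

def mlmc_estimate(L=5, n_per_level=2000, seed=1):
    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    rng = random.Random(seed)
    est = 0.0
    for level in range(L + 1):
        s = 0.0
        for _ in range(n_per_level):
            xf, xc = euler_pair(level, rng=rng)
            s += xf if xc is None else xf - xc
        est += s / n_per_level
    return est

# Exact answer for this Ornstein-Uhlenbeck process: E[X(1)] = exp(-1) ≈ 0.368.
print(mlmc_estimate())
```

In a full MLMC implementation the sample counts per level are chosen to balance the variance and cost contributions, which is what yields the O(ε⁻²) (Milstein) or O(ε⁻²(ln ε)²) (Euler-Maruyama) total cost quoted in the abstract; the fixed counts here keep the sketch short.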
ERIC Educational Resources Information Center
Hannan, Peter J.; Murray, David M.
1996-01-01
A Monte Carlo study compared performance of linear and logistic mixed-model analyses of simulated community trials having specific event rates, intraclass correlations, and degrees of freedom. Results indicate that in studies with adequate denominator degrees of freedom, the researcher may use either method of analysis, with certain cautions. (SLD)
NASA Astrophysics Data System (ADS)
Eising, G.; Kooi, B. J.
2012-06-01
Growth and decay of clusters at temperatures below Tc have been studied for a two-dimensional Ising model on both square and triangular lattices using Monte Carlo (MC) simulations and the enumeration of lattice animals. For the lattice animals, all unique cluster configurations with their internal bonds were identified up to 25 spins for the triangular lattice and up to 29 spins for the square lattice. From these configurations, the critical cluster sizes for nucleation have been determined based on two (thermodynamic) definitions. From the Monte Carlo simulations, the critical cluster size is also obtained by studying the decay and growth of inserted, most-compact clusters of different sizes. A good agreement is found between the results from the MC simulations and one of the definitions of critical size used for the lattice animals at temperatures T > ~0.4 Tc for the square lattice and T > ~0.2 Tc for the triangular lattice (for the range of external fields H considered). At low temperatures (T ≈ 0.2 Tc for the square lattice and T ≈ 0.1 Tc for the triangular lattice), magic numbers are found in the size distributions during the MC simulations. However, these numbers are not present in the critical cluster sizes based on the MC simulations, whereas they are present in the lattice animal data. In order to obtain these magic numbers in the critical cluster sizes based on the MC simulations, the temperature has to be reduced further, to T ≈ 0.15 Tc for the square lattice. The observed evolution of magic numbers as a function of temperature is rationalized in the present work.
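For readers unfamiliar with the underlying machinery, a minimal Metropolis single-spin-flip sketch for the 2D square-lattice Ising model in an external field H is given below. The paper's cluster-insertion protocol and lattice-animal enumeration are not reproduced, and the lattice size, temperature and field values are illustrative:

```python
import math, random

# Metropolis single-spin-flip MC for the 2D square-lattice Ising model
# with coupling J = 1 and external field H (periodic boundaries).

def metropolis_sweep(spins, L, T, H, rng):
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s = spins[i][j]
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * s * (nb + H)            # energy cost of flipping this spin
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -s

def magnetization(spins):
    return sum(map(sum, spins)) / float(len(spins) ** 2)

L, T, H = 16, 1.0, 0.05                    # T well below Tc ≈ 2.269 J/kB
rng = random.Random(42)
spins = [[1] * L for _ in range(L)]        # start from the compact all-up state
for sweep in range(200):
    metropolis_sweep(spins, L, T, H, rng)
print(magnetization(spins))
```

At this temperature, well below Tc, the magnetization stays close to 1; cluster growth/decay studies of the kind in the abstract start from such a configuration with an inserted cluster of opposite spins and watch whether it shrinks or grows.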
Buyukada, Musa
2017-02-01
The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R²pred) of 99.5%. A blend ratio of 90/10 (HH to coal, wt%), a temperature of 405 °C, and a heating rate of 44 °C min⁻¹ were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to justify the predicted mass-loss percentage, and a mass loss of 87.5% ± 0.2 was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Ono, Kiminori; Matsukawa, Yoshiya; Saito, Yasuhiro; Matsushita, Yohsuke; Aoki, Hideyuki; Era, Koki; Aoki, Takayuki; Yamaguchi, Togo
2015-06-01
This study presents the validity and ability of an aggregate mean free path cluster-cluster aggregation (AMP-CCA) model, a direct Monte Carlo simulation, to predict aggregate morphology for diameters from about 15-200 nm by comparing the particle size distributions (PSDs) with the results of a previous stochastic approach. The PSDs calculated by the AMP-CCA model, treating each calculated aggregate as a coalesced spherical particle, are in reasonable agreement with the results of the previous stochastic model regardless of the initial number concentration of particles. Shape analysis using two methods, perimeter fractal dimension and shape categories, demonstrated that the aggregate structures become more complex as the initial number concentration of particles increases. The AMP-CCA model provides a useful tool to calculate aggregate morphology and PSDs with reasonable accuracy.
NASA Astrophysics Data System (ADS)
Gruziel, Magdalena; Rudnicki, Witold R.; Lesyng, Bogdan
2008-02-01
In this study, the hydration of a model Lennard-Jones solute particle and analytical approximations of the free energy of hydration as functions of the solute's microscopic parameters are analyzed. The control parameters of the solute particle are the charge, the Lennard-Jones diameter, and the potential well depth. The obtained multivariate free energy functions of hydration were parametrized based on Metropolis Monte Carlo simulations in the extended NpT ensemble, and interpreted based on the mesoscopic solvation models proposed by Gallicchio and Levy [J. Comput. Chem. 25, 479 (2004)] and Wagoner and Baker [Proc. Natl. Acad. Sci. U.S.A. 103, 8331 (2006)]. Regarding the charge and the solute diameter, the dependence of the free energy on these parameters is in qualitative agreement with former studies. The role of the third parameter, the potential well depth, which was not considered previously, appeared to be significant for sufficiently precise bivariate solvation free energy fits. The free energy fits for cations and neutral solute particles were merged, resulting in a compact manifold of the free energy of solvation. The free energy of hydration for anions forms two separate manifolds, which most likely results from an abrupt change of the coordination number when changing the size of the anion particle.
NASA Astrophysics Data System (ADS)
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints for the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to the results of Durda et al. (2007). The resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
NASA Astrophysics Data System (ADS)
Madurga, Sergio; Rey-Castro, Carlos; Pastor, Isabel; Vilaseca, Eudald; David, Calin; Garcés, Josep Lluís; Puy, Jaume; Mas, Francesc
2011-11-01
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups.
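The proton-exchange move at the heart of such titration simulations can be illustrated with non-interacting sites, where the Metropolis acceptance rule must recover the Henderson-Hasselbalch curve. Electrostatics, explicit ions and the surface model of the paper are all omitted here, and the pKa is an illustrative value:

```python
import math, random

# Minimal titration-by-MC sketch: N independent carboxyl sites exchange
# protons with a bath at fixed pH. Deprotonation costs a free energy of
# ln(10) * (pKa - pH) in units of kT; for non-interacting sites the
# simulated degree of dissociation follows Henderson-Hasselbalch.

def titrate(pH, pKa=4.7, n_sites=500, sweeps=2000, seed=3):
    rng = random.Random(seed)
    deprot = [False] * n_sites                 # False = COOH, True = COO-
    beta_dG = math.log(10.0) * (pKa - pH)      # deprotonation free energy / kT
    acc, samples = 0, 0
    for sweep in range(sweeps):
        for _ in range(n_sites):
            k = rng.randrange(n_sites)
            dG = -beta_dG if deprot[k] else beta_dG   # proposed state reversal
            if dG <= 0 or rng.random() < math.exp(-dG):
                deprot[k] = not deprot[k]
        if sweep >= sweeps // 2:               # average over the second half
            acc += sum(deprot)
            samples += 1
    return acc / (samples * n_sites)

print(titrate(pH=4.7))   # ≈ 0.5 at pH = pKa
```

In the paper's semi-grand canonical setting each such move also inserts or deletes an explicit ion to preserve electroneutrality (the counterion or coion procedures discussed above), which this non-interacting sketch deliberately leaves out.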
NASA Astrophysics Data System (ADS)
Kawaki, Keima; Kuno, Yoshihito; Ichinose, Ikuo
2017-05-01
In this paper, we study phase diagrams of the extended Bose-Hubbard model (EBHM) in one dimension by means of quantum Monte-Carlo (QMC) simulation using the stochastic-series expansion (SSE). In the EBHM, there exists a nearest-neighbor repulsion in addition to the on-site repulsion. In the SSE-QMC simulation, the highest particle number at each site, nc, is also a controllable parameter, and we found that the phase diagrams depend on the value of nc. It is shown that in addition to the Mott insulator, superfluid, and density wave, the so-called Haldane insulator and supersolid phases appear in the phase diagrams, and their locations in the phase diagrams are clarified.
NASA Astrophysics Data System (ADS)
Saeki, Akinori; Kozawa, Takahiro; Tagawa, Seiichi; Cao, Heidi B.; Deng, Hai; Leeson, Michael J.
2008-01-01
Line edge roughness (LER) of patterned features in chemically amplified (CA) resists is formed in the acid generation stage and expected to be moderated by the acid diffusion and development process. It is essential to obtain information on the limit of LER in order to realize next-generation lithographies such as electron beam or extreme ultraviolet. Here, we report for the first time a process simulator based on physical and chemical reaction mechanisms. The LER of a positive-tone CA resist after development is investigated by Monte Carlo simulation and Mack's dissolution model. We found that the LER (high frequency) of less than 1.2 nm is achievable, although the process conditions and material design for achieving such a small LER are strict.
Progress on coupling UEDGE and Monte-Carlo simulation codes
Rensink, M.E.; Rognlien, T.D.
1996-08-28
Our objective is to develop an accurate self-consistent model for plasma and neutrals in the edge of tokamak devices such as DIII-D and ITER. The two-dimensional fluid model in the UEDGE code has been used successfully for simulating a wide range of experimental plasma conditions. However, when the neutral mean free path exceeds the gradient scale length of the background plasma, the validity of the diffusive and inertial fluid models in UEDGE is questionable. In the long mean free path regime, neutrals can be accurately and efficiently described by a Monte Carlo neutrals model. Coupling the fluid plasma model in UEDGE with a Monte Carlo neutrals model should improve the accuracy of our edge plasma simulations. The results described here used the EIRENE Monte Carlo neutrals code, but since information is passed to and from the UEDGE plasma code via formatted text files, any similar neutrals code such as DEGAS2 or NIMBUS could, in principle, be used.
DeMarco, J J; Cagnon, C H; Cody, D D; Stevens, D M; McCollough, C H; Zankl, M; Angel, E; McNitt-Gray, M F
2007-05-07
The purpose of this work is to examine the effects of patient size on radiation dose from CT scans. To perform these investigations, we used Monte Carlo simulation methods with detailed models of both patients and multidetector computed tomography (MDCT) scanners. A family of three-dimensional, voxelized patient models previously developed and validated by the GSF was implemented as input files using the Monte Carlo code MCNPX. These patient models represent a range of patient sizes and ages (8 weeks to 48 years) and have all radiosensitive organs previously identified and segmented, allowing the estimation of dose to any individual organ and calculation of patient effective dose. To estimate radiation dose, every voxel in each patient model was assigned both a specific organ index number and an elemental composition and mass density. Simulated CT scans of each voxelized patient model were performed using a previously developed MDCT source model that includes scanner specific spectra, including bowtie filter, scanner geometry and helical source path. The scan simulations in this work include a whole-body scan protocol and a thoracic CT scan protocol, each performed with fixed tube current. The whole-body scan simulation yielded a predictable decrease in effective dose as a function of increasing patient weight. Results from analysis of individual organs demonstrated similar trends, but with some individual variations. A comparison with a conventional dose estimation method using the ImPACT spreadsheet yielded an effective dose of 0.14 mSv mAs(-1) for the whole-body scan. This result is lower than the simulations on the voxelized model designated 'Irene' (0.15 mSv mAs(-1)) and higher than the models 'Donna' and 'Golem' (0.12 mSv mAs(-1)). For the thoracic scan protocol, the ImPACT spreadsheet estimates an effective dose of 0.037 mSv mAs(-1), which falls between the calculated values for Irene (0.042 mSv mAs(-1)) and Donna (0.031 mSv mAs(-1)) and is higher relative
NASA Astrophysics Data System (ADS)
Canti, Lorenzo; Fraccarollo, Alberto; Gatti, Giorgio; Errahali, Mina; Marchese, Leonardo; Cossi, Maurizio
2017-04-01
A combination of physisorption measurements and theoretical simulations was used to derive a plausible model for an amorphous nanoporous material, prepared by Friedel-Crafts alkylation of tetraphenylethene (TPM), leading to a crosslinked polymer of TPM connected by methylene bridges. The model was refined with a trial-and-error procedure, by comparing the experimental and simulated gas adsorption isotherms, which were analysed by a QSDFT approach to obtain the details of the porous structure. The adsorption of both nitrogen at 77 K and CO2 at 273 K was considered, the latter to describe the narrowest pores with greater accuracy. The best model was selected in order to reproduce the pore size distribution of the real material over a wide range of pore diameters, from 5 to 80 Å. The model was then verified by simulating the adsorption of methane and carbon dioxide, obtaining satisfactory agreement with the experimental uptakes. The resulting model can be fruitfully used to predict the adsorption isotherms of various gases, and the effect of chemical functionalizations or other post-synthesis treatments.
Papadimitroulas, P; Kagadis, GC; Loudos, G
2014-06-15
Purpose: Our purpose is to evaluate the absorbed dose administered in pediatric nuclear imaging studies. Monte Carlo simulations incorporating pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the "IT'IS Foundation". The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE "Materials Database". Several radiopharmaceuticals used in SPECT and PET applications were tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest doses absorbed in the kidneys and pancreas (9.29×10¹⁰ and 0.15×10¹⁰, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
NASA Astrophysics Data System (ADS)
Chen, Dongsheng; Zeng, Nan; Liu, Celong; Ma, Hui
2012-12-01
In this paper, we present a new method to simulate the signal of polarization-sensitive optical coherence tomography (PS-OCT) using the sphere-cylinder birefringence Monte Carlo program developed by our laboratory. With this program, we can simulate various turbid media based on different optical models and analyze the scattering and polarization information of the simulated media. The detection area, the detection angle range, and the number of scattering events of the photons are the three main conditions used to screen out the photons that contribute to the PS-OCT signal. In this paper, we study the effects of these three factors on the simulation results and find that the number of scattering events is the main factor affecting the signal, while the detection area and angle range are less important but still necessary conditions. To test and verify the feasibility of our simulation, we use two methods as references. One is the extended Huygens-Fresnel (EHF) method, which is based on electromagnetic theory and can describe both single and multiple scattering of light. By comparing the results obtained from the EHF method and ours, we explore the screening regularities of the photons in the simulation. We also compare our simulation with another polarization-related simulation presented by a Russian group, and with our experimental results. Both comparisons show that our simulation is efficient for PS-OCT at superficial depths, and should be further corrected in order to simulate the PS-OCT signal at greater depths.
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: AccuBoost is the most modern method of breast brachytherapy; it is a boost method applied to tissue compressed by a mammography unit. The dose distribution in uncompressed tissue, as well as in compressed tissue, is important and should be characterized. Methods: In this study, the mechanical behavior of the breast under mammography loading, the displacement of breast tissue, and the dose distribution in compressed and uncompressed tissue are investigated. Dosimetry was performed with two methods: Monte Carlo simulation using the MCNP5 code, and thermoluminescence dosimeters. For the Monte Carlo simulations, the dose values in a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated with a finite element model and calculated using ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing the thermoluminescence dosimeters into a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates near the chest. Results: The results indicate that the cone applicators deliver more than 95% of the dose to a depth of 5 to 17 mm, while the round applicator increases the skin dose. Nodal displacement under gravity and a 60 N force, i.e. under mammography compression, showed 43% contraction in the loading direction and 37% expansion in the orthogonal orientation. Finally, the thermoluminescence dosimeter results are consistent with MCNP5 in the breast phantom and on the chest skin, with average percentage differences of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability to calculate 3D dose via FE modeling. Polyvinyl alcohol is a reliable breast-tissue-equivalent material for a dosimetric phantom and provides the ability of TLD dosimetry
Mile, Viktória; Gereben, Orsolya; Kohara, Shinji; Pusztai, László
2012-08-16
A detailed study of the microscopic structure of two electrolyte solutions, cesium fluoride (CsF) and cesium iodide (CsI) in water, is presented. For revealing the influence of salt concentration on the structure, CsF solutions at concentrations of 15.1 and 32.3 mol % and CsI solutions at concentrations of 1.0 and 3.9 mol % are investigated. For each concentration, we combine total scattering structure factors from neutron and X-ray diffraction and 10 partial radial distribution functions from molecular dynamics simulations in one single structural model, generated by reverse Monte Carlo modeling. For the present solutions we show that the level of consistency between simulations that use simple pair potentials and experimental structure factors is at least semiquantitative for even the extremely highly concentrated CsF solutions. Remaining inconsistencies seem to be caused primarily by water-water distribution functions, whereas slightly problematic parts appear on the ion-oxygen partials, too. As a final result, we obtained particle configurations from reverse Monte Carlo modeling that were in quantitative agreement with both sets of diffraction data and most of the MD simulated partial radial distribution functions. From the particle coordinates, distributions of the number of first neighbors as well as angular correlation functions were calculated. The average number of water molecules around cations in both materials decreases from about 8.0 to about 5.1 as concentration increases, whereas the same quantity for the anions (X) changes from about 5.3 to about 3.7 in the case of CsF and from about 6.2 to about 4.0 in the case of CsI. The average angle of X···H-O particle arrangements, characteristic of anion-water hydrogen bonds, is closer to 180° than that found for O···H-O arrangements (water-water hydrogen bonds) at higher concentrations.
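The reverse Monte Carlo idea used here, moving particles and accepting moves that improve agreement with measured structural data, can be sketched in one dimension. This toy is not the authors' code: the "experimental" target is a pair-distance histogram rather than a structure factor, and all names and parameters are illustrative assumptions.

```python
import math, random

random.seed(5)
n, box = 30, 10.0                    # particles on a periodic line segment
bins, bw = 20, 0.25                  # histogram covering 0 .. box/2

def hist(pos):
    """Pair-distance histogram, standing in for a measured structure factor."""
    h = [0] * bins
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(pos[i] - pos[j]) % box
            d = min(d, box - d)      # minimum-image distance
            k = int(d / bw)
            if k < bins:
                h[k] += 1
    return h

chi2 = lambda h, t: sum((x - y) ** 2 for x, y in zip(h, t))

# "Experimental" target, generated from a hidden, nearly regular reference
ref = [box * i / n + random.uniform(-0.05, 0.05) for i in range(n)]
target = hist(ref)

pos = [random.uniform(0, box) for _ in range(n)]   # random starting model
cost = chi2(hist(pos), target)
start = cost
sigma2 = 4.0                          # assumed experimental variance
for _ in range(10000):
    i = random.randrange(n)
    old = pos[i]
    pos[i] = (old + random.gauss(0, 0.3)) % box    # single-particle move
    new_cost = chi2(hist(pos), target)
    dchi = (new_cost - cost) / sigma2
    if dchi <= 0 or random.random() < math.exp(-dchi / 2):
        cost = new_cost                            # accept the move
    else:
        pos[i] = old                               # reject: restore position
```

In a real RMC refinement, several data sets (neutron and X-ray structure factors plus MD-derived partials, as in the abstract) would each contribute a χ² term, and the final particle configuration is then analysed for coordination numbers and angular correlations.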
Chen, Yunjie; Roux, Benoît
2015-08-11
Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to exploring and sampling the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up
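The six-step loop can be sketched for a one-dimensional toy system. This is an illustrative reduction, not the authors' implementation: the FG model is a double well, the CG model a soft harmonic surrogate, the FG-to-CG map is the identity, and, because no FG relaxation is performed between driving increments, the accumulated nonequilibrium work telescopes to a simple energy difference.

```python
import math, random

random.seed(2)
beta = 3.0
U_fg = lambda x: (x * x - 1.0) ** 2        # fine-grained double well
U_cg = lambda x: 0.1 * x * x               # soft coarse-grained surrogate

def langevin(x, U, dt, n):
    """Overdamped Langevin propagation on potential U (numerical gradient)."""
    for _ in range(n):
        grad = (U(x + 1e-5) - U(x - 1e-5)) / 2e-5
        x += -grad * dt + math.sqrt(2 * dt / beta) * random.gauss(0, 1)
    return x

x, samples = 1.0, []
for _ in range(4000):
    x = langevin(x, U_fg, 1e-3, 20)        # (1) equilibrium FG dynamics
    X = x                                  # (2) map FG -> CG (identity here)
    X = langevin(X, U_cg, 1e-2, 20)        # (3) propagate the CG model
    # (4)-(5) drive the FG coordinate toward the CG target, accumulating
    # nonequilibrium work; with no FG relaxation between driving increments
    # the work telescopes to a plain energy difference.
    work = U_fg(X) - U_fg(x)
    # (6) Metropolis accept/reject on the accumulated work
    if work <= 0 or random.random() < math.exp(-beta * work):
        x = X
    samples.append(x)

# double-well samples should concentrate near x = +/-1
mean_abs = sum(abs(s) for s in samples[1000:]) / len(samples[1000:])
```

In the full algorithm the CG target guides a driven neMD trajectory of all FG atoms, and the symmetric momentum-reversal prescription is what makes the work-based Metropolis step rigorously preserve the Boltzmann distribution.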
Membranes with Fluctuating Topology: Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Gompper, G.; Kroll, D. M.
1998-09-01
Much of the phase behavior observed in self-assembling amphiphilic systems can be understood in the context of ensembles of random surfaces. In this article, it is shown that Monte Carlo simulations of dynamically triangulated surfaces of fluctuating topology can be used to determine the structure and thermal behavior of sponge phases, as well as the sponge-to-lamellar transition in these systems. The effect of the saddle-splay modulus, κ¯, on the phase behavior is studied systematically for the first time. Our data provide strong evidence for a positive logarithmic renormalization of κ¯; this result is consistent with the lamellar-to-sponge transition observed in experiments for decreasing amphiphile concentration.
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wall fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematical study of the factors responsible for the discrepancy in the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Error propagation in first-principles kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Matera, Sebastian
2017-04-01
First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculations. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address error propagation in these multiscale models. We find that we can still derive the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.
McNally, Kevin; Cotton, Richard; Cocker, John; Jones, Kate; Bartels, Mike; Rick, David; Price, Paul; Loizou, George
2012-01-01
There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure.
NASA Astrophysics Data System (ADS)
Lukšič, Miha; Hribar-Lee, Barbara; Baleón Tochimani, Sergio; Pizio, Orest
2012-01-01
We present a theoretical study of a quenched-annealed system in which an annealed component is the restricted primitive model electrolyte in a mixture with an uncharged hard sphere species, i.e. the solvent primitive model (SPM), whereas a disordered quenched medium is modelled as the restricted primitive model (RPM) electrolyte. The annealed mixture is in thermal and chemical equilibrium with an external reservoir containing the same SPM. The system is studied by using the replica Ornstein-Zernike (ROZ) integral equation theory complemented with the hypernetted chain (HNC) approximation and via the grand canonical Monte Carlo simulations. We are primarily interested in collecting computer simulation data and comparing them with theoretical predictions at room temperature (298.15 K). In terms of physical observables, our focus is in the selectivity effects of adsorption of the mixture described by the adsorption isotherms as well as by the composition isotherms. The influence of the ionic matrix density and of the bulk state of the SPM mixture on adsorption and selectivity are examined in detail. Besides, we analyse the dependence of the internal energy and the constant volume heat capacity on the conditions of adsorption. Finally, we explore briefly the effects of quenching conditions of the RPM matrix on the pair distribution functions of fluid ions in the mixture. In general, the theory is in a very good agreement with computer simulations.
Resist develop prediction by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun
2002-07-01
Various resist develop models have been suggested to express the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced using this method. We considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with those of other papers. This study can be helpful for the development of new photoresists and developers that can be used to pattern device features smaller than 100 nm.
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
NASA Astrophysics Data System (ADS)
Wilson, J. A.; Richardson, J. A.
2015-12-01
Traditional methods used to calculate recurrence rate of volcanism, such as linear regression, maximum likelihood and Weibull-Poisson distributions, are effective at estimating recurrence rate and confidence level, but these methods are unable to estimate uncertainty in recurrence rate through time. We propose a new model for estimating recurrence rate and uncertainty: the Volcanic Event Recurrence Rate Model (VERRM). VERRM is an algorithm that incorporates radiometric ages, volcanic stratigraphy and paleomagnetic data into a Monte Carlo simulation, generating acceptable ages for each event. Each model run is used to calculate recurrence rate using a moving average window. These rates are binned into discrete time intervals and plotted using the 5th, 50th and 95th percentiles. We present recurrence rates from Cima Volcanic Field (CA), Yucca Mountain (NV) and Arsia Mons (Mars). Results from Cima Volcanic Field illustrate how several K-Ar ages with large uncertainties obscure three well-documented volcanic episodes. Yucca Mountain results are similar to published rates and illustrate the effect of using the same radiometric age for multiple events in a spatially defined cluster. Arsia Mons results show a clear waxing/waning of volcanism through time. VERRM output may be used for a spatio-temporal model or to plot uncertainty in quantifiable parameters such as eruption volume or geochemistry. Alternatively, the algorithm may be reworked to constrain geomagnetic chrons. VERRM is implemented in Python 2.7 and takes advantage of the NumPy, SciPy and matplotlib libraries for optimization and high-quality plotting. A typical Monte Carlo simulation of 40 volcanic events takes a few minutes to a couple of hours to complete, depending on the bin size used to assign ages.
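A VERRM-like calculation can be sketched as follows. The event ages, uncertainties, and window parameters below are hypothetical, and the stratigraphic and paleomagnetic ordering constraints that the real algorithm applies are omitted for brevity.

```python
import random

# Hypothetical events: (mean age in ka, 1-sigma radiometric uncertainty in ka)
events = [(10, 2), (25, 3), (30, 1), (55, 5), (60, 4), (80, 6)]

def recurrence_percentiles(events, n_runs=2000, window=30.0,
                           t_grid=range(0, 91, 10), seed=0):
    """Each Monte Carlo run draws an acceptable age for every event from its
    Gaussian radiometric uncertainty, then computes recurrence rate as
    events-per-window in a moving window centred on each grid time. The
    per-time rates are summarized by the 5th, 50th and 95th percentiles."""
    rng = random.Random(seed)
    rates = {t: [] for t in t_grid}
    for _ in range(n_runs):
        ages = [rng.gauss(mu, sigma) for mu, sigma in events]
        for t in t_grid:
            n = sum(1 for a in ages if abs(a - t) <= window / 2)
            rates[t].append(n / window)          # events per ka
    out = {}
    for t in t_grid:
        r = sorted(rates[t])
        out[t] = (r[int(0.05 * n_runs)],         # 5th percentile
                  r[int(0.50 * n_runs)],         # median
                  r[int(0.95 * n_runs)])         # 95th percentile
    return out

pct = recurrence_percentiles(events)
```

Plotting the three percentile curves against time reproduces the kind of uncertainty band described in the abstract; large age uncertainties widen the band and can smear distinct episodes together.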
NASA Technical Reports Server (NTRS)
Karakoylu, E.; Franz, B.
2016-01-01
This is a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is computed for remote sensing reflectance (Rrs) at 443 nm.
Wiebe, J; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is a relatively new, open-source MC software package built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm² to 98×98 mm² (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
NASA Astrophysics Data System (ADS)
Fougere, Nicolas; altwegg, kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Bockelee-Morvan, Dominique; Calmonte, Ursina; Capaccioni, Fabrizio; Combi, Michael R.; De Keyser, Johan; Debout, Vincent; Erard, Stéphane; Fiethe, Björn; Filacchione, Gianrico; Fink, Uwe; Fuselier, Stephen; Gombosi, T. I.; Hansen, Kenneth C.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Migliorini, Alessandra; Piccioni, Giuseppe; Rinaldi, Giovanna; Rubin, Martin; Shou, Yinsi; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; VIRTIS Team and ROSINA Team
2016-10-01
During the past few decades, modeling of the cometary coma has seen tremendous improvements, notably with the increase in computer capacity. While the Haser model is still widely used for the interpretation of cometary observations, its rather simplistic assumptions, such as spherical symmetry and constant outflow velocity, prevent it from explaining some coma observations. Hence, more complex coma models have emerged, taking full advantage of the numerical approach. The only method that can resolve all the flow regimes encountered in the coma, given the drastic changes in Knudsen number, is the Direct Simulation Monte Carlo (DSMC) approach. The data acquired by the instruments on board the Rosetta spacecraft provide a large number of observations of the spatial and temporal variations of comet 67P/Churyumov-Gerasimenko's coma. These measurements provide constraints that can be applied to the coma model in order to best describe the rarefied atmosphere of 67P. We present the latest results of our 3D multi-species DSMC model using the Adaptive Mesh Particle Simulator (Tenishev et al. 2008 and 2011, Fougere 2014). The model uses a realistic nucleus shape model from the OSIRIS team and takes into account the self-shadowing created by its concavities. The gas flux at the surface of the nucleus is deduced from the relative orientation with respect to the Sun and from an activity distribution that enables simulation of both the non-uniformity of the surface activity and the heterogeneities of the outgassing. The model results are compared to the ROSINA and VIRTIS observations. Progress in incorporating Rosetta measurements from the last half of the mission into our models will be presented. The good agreement between the model and these measurements from two very different techniques reinforces our understanding of the physical processes taking place in the coma.
McMillan, K; Bostani, M; McNitt-Gray, M; McCollough, C
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not
Accelerated Monte Carlo simulations with restricted Boltzmann machines
NASA Astrophysics Data System (ADS)
Huang, Li; Wang, Lei
2017-01-01
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
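The key trick, using RBM block Gibbs sampling (which obeys detailed balance with respect to the RBM distribution) as a proposal that is then Metropolis-corrected against the physical target, can be sketched on a small Ising chain. Here the RBM is left untrained with small random weights, whereas in the paper it is fitted to the physical model, so all weights and parameters below are illustrative assumptions.

```python
import math, random

random.seed(3)
N, NH, beta = 8, 4, 0.5
# Untrained toy RBM: small random weights (in practice these are fitted
# to the physical model's Boltzmann weights)
W = [[random.uniform(-0.2, 0.2) for _ in range(NH)] for _ in range(N)]
a = [0.0] * N
b = [0.0] * NH

sig = lambda x: 1.0 / (1.0 + math.exp(-x))

def log_p_rbm(v):
    """Unnormalized log-probability of the RBM marginal over visible units."""
    s = sum(a[i] * v[i] for i in range(N))
    for j in range(NH):
        s += math.log1p(math.exp(b[j] + sum(W[i][j] * v[i] for i in range(N))))
    return s

def log_p_target(v):
    """1D ferromagnetic Ising chain, spins s = 2v - 1, open boundaries."""
    s = [2 * x - 1 for x in v]
    return beta * sum(s[i] * s[i + 1] for i in range(N - 1))

def gibbs_proposal(v):
    """One block Gibbs sweep v -> h -> v' of the RBM. This kernel satisfies
    detailed balance w.r.t. the RBM distribution, so the Metropolis
    correction only needs the ratio of target to RBM weights."""
    h = [1 if random.random() < sig(b[j] + sum(W[i][j] * v[i] for i in range(N)))
         else 0 for j in range(NH)]
    return [1 if random.random() < sig(a[i] + sum(W[i][j] * h[j] for j in range(NH)))
            else 0 for i in range(N)]

v = [random.randint(0, 1) for _ in range(N)]
energies = []
for _ in range(6000):
    vp = gibbs_proposal(v)
    log_ratio = (log_p_target(vp) - log_p_target(v)
                 + log_p_rbm(v) - log_p_rbm(vp))
    if log_ratio >= 0 or random.random() < math.exp(log_ratio):
        v = vp
    s = [2 * x - 1 for x in v]
    energies.append(-sum(s[i] * s[i + 1] for i in range(N - 1)) / (N - 1))

mean_e = sum(energies[1000:]) / len(energies[1000:])
```

Because the correction is exact, the chain samples the Ising distribution regardless of RBM quality; a well-trained RBM simply raises the acceptance ratio and shortens autocorrelation times, which is the speedup the paper reports near the phase transition.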
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ramachandra, Ranjith M.; Panse, Ashish; Balla, Deepika; Yan, Jianhua; Carson, Richard E.
2010-04-01
We previously designed a component-based 3-D PSF model to obtain a compact yet accurate system matrix for a dedicated human brain PET scanner. In this work, we adapted the model to a small animal PET scanner. Based on the model, we derived the system matrix for a back-to-back gamma source in air and for fluorine-18 and iodine-124 sources in water by Monte Carlo simulation. The characteristics of the PSF model were evaluated, and the performance of the newly derived system matrix was assessed by comparing its reconstructed images with those from the established reconstruction program provided with the animal PET scanner. The new system matrix showed strong PSF dependency on the line-of-response (LOR) incident angle and LOR depth. This confirmed the validity of the two components selected for the model. The effect of positron range on the system matrix was observed by comparing the PSFs of different isotopes. A simulated and an experimental hot-rod phantom study showed that reconstruction with the proposed system matrix achieved better resolution recovery than the algorithm provided by the manufacturer. Quantitative evaluation also showed better convergence to the expected contrast value at a similar noise level. In conclusion, it has been shown that the system matrix derivation method is applicable to the animal PET system studied, suggesting that the method may be used for other PET systems and different isotope applications.
Monte Carlo Simulation of Surface Reactions
NASA Astrophysics Data System (ADS)
Brosilow, Benjamin J.
A Monte-Carlo study of the catalytic reaction of CO and O_2 over transition metal surfaces is presented, using generalizations of a model proposed by Ziff, Gulari and Barshad (ZGB). A new "constant-coverage" algorithm is described and applied to the model in order to elucidate the behavior near the model's first-order transition, and to draw an analogy between this transition and first-order phase transitions in equilibrium systems. The behavior of the model is then compared to the behavior of CO oxidation systems over Pt single-crystal catalysts. This comparison leads to the introduction of a new variation of the model in which one of the reacting species requires a large ensemble of vacant surface sites in order to adsorb. Further, it is shown that precursor adsorption and an effective Eley-Rideal mechanism must also be included in the model in order to obtain detailed agreement with experiment. Finally, variations of the model on finite and two-component lattices are studied as models for low temperature CO oxidation over Noble Metal/Reducible Oxide and alloy catalysts.
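A minimal sketch of the original ZGB dynamics (CO adsorption with probability y, dissociative O2 adsorption on adjacent vacancy pairs, instantaneous CO+O reaction) might look like this; the lattice size, step count, and value of y are illustrative.

```python
import random

random.seed(1)
L = 20
EMPTY, CO, O = 0, 1, 2
lat = [[EMPTY] * L for _ in range(L)]

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def react(i, j):
    """Newly adsorbed species at (i, j) reacts with one randomly chosen
    adjacent molecule of the opposite species (CO + O -> CO2 desorbs,
    leaving both sites vacant)."""
    want = O if lat[i][j] == CO else CO
    sites = neighbors(i, j)
    random.shuffle(sites)
    for x, y in sites:
        if lat[x][y] == want:
            lat[i][j] = EMPTY
            lat[x][y] = EMPTY
            return

def zgb_step(y_co):
    i, j = random.randrange(L), random.randrange(L)
    if random.random() < y_co:                    # CO adsorption attempt
        if lat[i][j] == EMPTY:
            lat[i][j] = CO
            react(i, j)
    else:                                         # dissociative O2 adsorption
        x, y = random.choice(neighbors(i, j))
        if lat[i][j] == EMPTY and lat[x][y] == EMPTY:
            lat[i][j] = O
            lat[x][y] = O
            react(i, j)
            if lat[x][y] == O:
                react(x, y)

y_co = 0.52                                       # CO fraction in the gas phase
for _ in range(20000):
    zgb_step(y_co)
cov_co = sum(row.count(CO) for row in lat) / L**2
cov_o = sum(row.count(O) for row in lat) / L**2
```

Sweeping y_co reveals the model's two kinetic transitions (O-poisoned and CO-poisoned states); the constant-coverage algorithm described above instead fixes the coverage and measures the conjugate rate, which resolves the first-order transition region more cleanly.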
Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X
2016-06-15
Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of doses for those ideal pencil beams with weights in the Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%, respectively. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
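The superposition-plus-fitting idea can be illustrated with a toy example: synthetic "ideal pencil beam" depth-dose kernels are combined with Gaussian energy weights, and the mean energy and spread are recovered by least squares. The kernels and range-energy relation below are stand-ins, not MC-computed dose distributions, and a brute-force grid search replaces the paper's conjugate-gradient scheme.

```python
import numpy as np

depth = np.linspace(0.0, 20.0, 200)          # cm
energies = np.linspace(80.0, 120.0, 41)      # MeV, ideal monoenergetic beams

def ideal_dose(E):
    """Toy depth-dose kernel of an ideal pencil beam: a narrow peak near its
    range (a stand-in for MC-computed kernels, not a real Bragg curve)."""
    R = 0.15 * E - 2.0                       # toy range-energy relation
    return np.exp(-0.5 * ((depth - R) / 0.4) ** 2)

kernels = np.array([ideal_dose(E) for E in energies])

def model_dose(mean_E, sigma_E):
    """Real-beam dose = Gaussian-weighted superposition of ideal-beam kernels."""
    w = np.exp(-0.5 * ((energies - mean_E) / sigma_E) ** 2)
    return (w / w.sum()) @ kernels

# Synthetic "measurement" with known ground-truth parameters
measured = model_dose(101.0, 3.0)

# Commission by brute-force least squares over (mean energy, energy spread)
grid_E = np.arange(90.0, 110.1, 0.5)
grid_s = np.arange(1.0, 6.01, 0.25)
sse, fit_E, fit_s = min((float(np.sum((model_dose(E, s) - measured) ** 2)), float(E), float(s))
                        for E in grid_E for s in grid_s)
```

The point is structural: because the real beam is a weighted superposition of precomputed ideal-beam doses, commissioning reduces to fitting a handful of weight parameters rather than rerunning MC for every candidate beam.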
NASA Astrophysics Data System (ADS)
Shi, Feng; Wang, Dezhen; Ren, Chunsheng
2008-06-01
Atmospheric-pressure nonequilibrium discharge plasmas are widely applied in modern plasma processing. Simulations of discharge in pure Ar and pure He gases at atmospheric pressure, driven by a high-voltage trapezoidal nanosecond pulse, have been performed using a one-dimensional particle-in-cell Monte Carlo collision (PIC-MCC) model coupled with a renormalization and weighting procedure (mapping algorithm). Numerical results show that the characteristics of discharge in the two inert gases are very similar. A local reverse-field effect and double-peak distributions of the charged-particle densities are observed. The electron and ion energy distribution functions are also examined, and the discharge is interpreted in terms of an ionization avalanche. Furthermore, the total current density is found to be a function of time but independent of position.
Leonhard, Kai; Prausnitz, John M.; Radke, Clayton J.
2004-01-01
Amino acid residue–solvent interactions are required for lattice Monte Carlo simulations of model proteins in water. In this study, we propose an interaction-energy scale that is based on the interaction scale by Miyazawa and Jernigan. It permits systematic variation of the amino acid–solvent interactions by introducing a contrast parameter for the hydrophobicity, Cs, and a mean attraction parameter for the amino acids, ω. Changes in the interaction energies strongly affect many protein properties. We present an optimized energy parameter set for best representing realistic behavior typical for many proteins (fast folding and high cooperativity for single chains). Our optimal parameters feature a much weaker hydrophobicity contrast and mean attraction than does the original interaction scale. The proposed interaction scale is designed for calculating the behavior of proteins in bulk and at interfaces as a function of solvent characteristics, as well as protein size and sequence. PMID:14739322
Monte Carlo simulations of particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.
1994-01-01
The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta(sub B1) to the shock normal. Spectral results from test-particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.
Dong, Jing; Xiong, Wei; Chen, Yuancheng; Zhao, Yunfeng; Lu, Yang; Zhao, Di; Li, Wenyan; Liu, Yanhui; Chen, Xijing
2016-03-01
In this study, a population pharmacokinetic (PPK) model of biapenem in Chinese patients with lower respiratory tract infections (LRTIs) was developed and optimal dosage regimens based on Monte Carlo simulation were proposed. A total of 297 plasma samples from 124 Chinese patients were assayed chromatographically in a prospective, single-centre, open-label study, and pharmacokinetic parameters were analysed using NONMEM. Creatinine clearance (CLCr) was found to be the most significant covariate affecting drug clearance. The final PPK model was: CL (L/h)=9.89+(CLCr-66.56)×0.049; Vc (L)=13; Q (L/h)=8.74; and Vp (L)=4.09. Monte Carlo simulation indicated that for a target of ≥40% T>MIC (duration that the plasma level exceeds the causative pathogen's MIC), the biapenem pharmacokinetic/pharmacodynamic (PK/PD) breakpoint was 4μg/mL for doses of 0.3g every 6h (3-h infusion) and 1.2g (24-h continuous infusion). For a target of ≥80% T>MIC, the PK/PD breakpoint was 4μg/mL for a dose of 1.2g (24-h continuous infusion). The probability of target attainment (PTA) could not reach ≥90% at the usual biapenem dosage regimen (0.3g every 12h, 0.5-h infusion) when the MIC of the pathogenic bacteria was 4μg/mL, which would most likely result in unsatisfactory clinical outcomes in Chinese patients with LRTIs. Higher doses and longer infusion times would be appropriate for empirical therapy. When the patient's symptoms indicate a strong suspicion of Pseudomonas aeruginosa or Acinetobacter baumannii infection, combination therapy with other antibacterial agents may be more appropriate.
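The Monte Carlo PTA calculation can be sketched with a simplified one-compartment model (the published model is two-compartment; the peripheral compartment is ignored here). The CLCr spread, the 25% between-subject variability, and the use of the 13 L central volume alone are illustrative assumptions layered on the reported point estimates.

```python
import math
import random

random.seed(42)

def ft_above_mic(cl, v, dose_mg, tau, t_inf, mic, n_dose=8, dt=0.05):
    """Fraction of the final dosing interval with plasma level above MIC,
    for a one-compartment IV-infusion model integrated by forward Euler."""
    k = cl / v
    amt, t = 0.0, 0.0
    above = total = 0.0
    t_end = n_dose * tau
    while t < t_end:
        rate = dose_mg / t_inf if (t % tau) < t_inf else 0.0
        amt += (rate - k * amt) * dt
        t += dt
        if t > t_end - tau:                 # score only the last interval
            total += dt
            above += dt if amt / v > mic else 0.0
    return above / total

def pta(mic, dose_g=0.3, tau=6.0, t_inf=3.0, target=0.40, n=500):
    """Probability of target attainment for fT>MIC >= target.
    CL model from the abstract: CL = 9.89 + (CLCr - 66.56) * 0.049 (L/h);
    the CLCr distribution and 25% CV are simulation assumptions."""
    hits = 0
    for _ in range(n):
        clcr = random.gauss(66.56, 20.0)
        cl = max(1.0, (9.89 + (clcr - 66.56) * 0.049) * math.exp(random.gauss(0.0, 0.25)))
        if ft_above_mic(cl, 13.0, dose_g * 1000.0, tau, t_inf, mic) >= target:
            hits += 1
    return hits / n

pta_low_mic = pta(mic=1.0)     # susceptible pathogen
pta_high_mic = pta(mic=16.0)   # resistant pathogen
```

Each simulated subject draws individual PK parameters, the time above MIC is computed for the regimen, and PTA is the fraction of subjects meeting the %T>MIC target; scanning MIC values then yields the PK/PD breakpoint.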
NASA Astrophysics Data System (ADS)
Soligo, Riccardo
In this work, the insight provided by our sophisticated Full Band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices like GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to the setup of a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly nonlinear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN-HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contribution provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small-signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitive phenomena that limit the frequency response in devices. In Chapter 5 the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. Moreover, the large signal characterization of the
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
Monte Carlo Simulation of Quantum Critical Spin Systems
NASA Astrophysics Data System (ADS)
Troyer, Matthias
1998-03-01
The recent development of the loop algorithm [H.G. Evertz et al., Phys. Rev. Lett. 70, 875 (1993); B.B. Beard and U.-J. Wiese, Phys. Rev. Lett. 77, 5130 (1996)] for quantum Monte Carlo simulations has opened up a new field of problems that can be studied by quantum Monte Carlo. High precision simulations of phase transitions in quantum spin systems are now possible. In this talk we shall present results on two simulations of quantum phase transitions between a Néel ordered phase and a gapped resonating valence bond (RVB) phase in two and three spatial dimensions. The critical exponents for such a quantum phase transition have been calculated in two dimensions on a 1/5-depleted CaV_4O9 type lattice [M. Troyer et al., Phys. Rev. Lett. 76, 3822 (1996); J. Phys. Soc. Jpn. 66, 2957 (1997)]. Our results on large lattices are, in contrast to some of the previous simulations on smaller systems, consistent with a mapping to the non-linear sigma model and support the conjecture that the Berry phase terms are dangerously irrelevant. Another simulation in three spatial dimensions was motivated by experiments on the coupled spin ladder compound LaCuO_2.5. Early magnetic susceptibility measurements on this material were interpreted to be consistent with a spin gap of order 400K, while NMR and μSR measurements showed antiferromagnetic ordering at around T_N≈110K. Quantum Monte Carlo simulations were used to fit the experimental measurements and identified this material as a nearly quantum critical but ordered three-dimensional quantum Heisenberg antiferromagnet [M. Troyer et al., Phys. Rev. B 55, R6117 (1997); B. Normand and T.M. Rice, Phys. Rev. B 54, 7180 (1996)].
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.
Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J
2007-10-21
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in
Morphological evolution of growing crystals - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz
1988-01-01
The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
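A bare-bones version of diffusion-limited growth with a sticking (attachment-kinetics) parameter, in the spirit of the model described above, could look like this. The lattice size, particle count, and sticking probability are arbitrary, and the anisotropic attachment kinetics and surface diffusion of the full model are omitted.

```python
import random

random.seed(7)
L = 41
center = L // 2
cluster = {(center, center)}                    # seed crystal
MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def touches_cluster(x, y):
    return any((x + dx, y + dy) in cluster for dx, dy in MOVES)

def launch():
    """Release a walker from a random site on the domain edge."""
    k = random.randrange(L)
    return random.choice([(0, k), (L - 1, k), (k, 0), (k, L - 1)])

def grow_one(stick_prob):
    """Random-walk a particle until it attaches next to the cluster.
    stick_prob < 1 mimics an attachment barrier: lower values let particles
    sample more perimeter sites, pushing growth toward the compact,
    kinetics-dominated regime."""
    while True:
        x, y = launch()
        if (x, y) in cluster:
            continue
        for _ in range(20000):                  # walk budget, then relaunch
            if touches_cluster(x, y) and random.random() < stick_prob:
                cluster.add((x, y))
                return
            dx, dy = random.choice(MOVES)
            nx, ny = (x + dx) % L, (y + dy) % L
            if (nx, ny) not in cluster:         # walkers cannot enter the solid
                x, y = nx, ny

for _ in range(150):
    grow_one(stick_prob=0.8)
size = len(cluster)
```

With stick_prob near 1 the aggregate develops the open, dendritic branches of diffusion-controlled growth; lowering it (or adding surface diffusion) compacts the morphology, which is the qualitative transition the abstract describes.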
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics fixed-source and criticality calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated in SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production processes. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear
NASA Astrophysics Data System (ADS)
Monceau, Pascal
2006-09-01
The extension of the phase diagram of the q-state Potts model to noninteger dimension is investigated by means of Monte Carlo simulations on Sierpinski and Menger fractal structures. Both multicanonical and canonical simulations have been carried out with the help of the Wang-Landau and the Wolff cluster algorithms. Lower bounds are provided for the critical values qc of q where a first-order transition is expected in the cases of two structures whose fractal dimension is smaller than 2: The transitions associated with the seven-state and ten-state Potts models on Sierpinski carpets with fractal dimensions df≃1.8928 and df≃1.7925, respectively, are shown to be second-order ones, the renormalization eigenvalue exponents yh are calculated, and bounds are provided for the renormalization eigenvalue exponents yt and the critical temperatures. Moreover, the results suggest that second-order transitions are expected to occur for very large values of q when the fractal dimension is lowered below 2—that is, in the case of hierarchically weakly connected systems with an infinite ramification order. Finally, the transition associated with the four-state Potts model on a fractal structure with a dimension df≃2.631 is shown to be a weakly first-order one.
Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography
Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.
2000-02-01
The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
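Sampling scattering angles from the Henyey-Greenstein phase function is typically done with the standard inverse-CDF formula, which can be sketched as follows (the anisotropy value g = 0.9 is a typical tissue figure, not taken from the paper):

```python
import math
import random

random.seed(0)

def sample_hg_cos(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    using the standard inverse-CDF formula; mean of cos(theta) equals g."""
    if abs(g) < 1e-6:
        return 2.0 * random.random() - 1.0       # isotropic limit
    u = random.random()
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

g = 0.9                                          # typical tissue anisotropy
n = 200000
mean_cos = sum(sample_hg_cos(g) for _ in range(n)) / n
```

Checking that the sample mean of cos(theta) recovers g is a quick sanity test of the sampler; the azimuthal angle is drawn uniformly in [0, 2*pi) in a full photon-transport loop.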
Shen, Chuansheng; Chen, Hanshuang; Hou, Zhonghuai; Xin, Houwen
2011-06-01
Developing an effective coarse-grained (CG) approach is a promising way for studying dynamics on large size networks. In the present work, we have proposed a strength-based CG (s-CG) method to study critical phenomena of the Potts model on weighted complex networks. By merging nodes with close strengths together, the original network is reduced to a CG network with much smaller size, on which the CG Hamiltonian can be well defined. In particular, we make an error analysis and show that our s-CG approach satisfies the condition of statistical consistency, which demands that the equilibrium probability distribution of the CG model matches that of the microscopic counterpart. Extensive numerical simulations are performed on scale-free networks and random networks, with or without strength correlation, showing that this s-CG approach works very well in reproducing the phase diagrams, fluctuations, and finite-size effects of the microscopic model, while the d-CG approach proposed in our recent work [Phys. Rev. E 82, 011107 (2010)] does not.
Direct simulation Monte Carlo investigation of hydrodynamic instabilities in gases
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.; Plimpton, S. J.
2016-11-01
The Rayleigh-Taylor instability (RTI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, two-dimensional and three-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode-perturbed interfaces between two atmospheric-pressure monatomic gases. The DSMC simulations reproduce all qualitative features of the RTI and are in reasonable quantitative agreement with existing theoretical and empirical models in the linear, nonlinear, and self-similar regimes. At late times, the instability is seen to exhibit a self-similar behavior, in agreement with experimental observations. For the conditions simulated, diffusion can influence the initial instability growth significantly.
Gustavson, Kristin; Borren, Ingrid
2014-12-17
Medical researchers often use longitudinal observational studies to examine how risk factors predict change in health over time. Selective attrition and inappropriate modeling of regression toward the mean (RTM) are two potential sources of bias in such studies. The current study used Monte Carlo simulations to examine bias related to selective attrition and inappropriate modeling of RTM in the study of prediction of change. This was done for multiple regression (MR) and change score analysis. MR provided biased results when attrition was dependent on follow-up and baseline variables to quite substantial degrees, while results from change score analysis were biased when attrition was more strongly dependent on variables at one time point than the other. A positive association between the predictor and change in the health variable was underestimated in MR and overestimated in change score analysis due to selective attrition. Inappropriate modeling of RTM, on the other hand, led to overestimation of this association in MR and underestimation in change score analysis. Hence, selective attrition and inappropriate modeling of RTM biased the results in opposite directions. MR and change score analysis are both quite robust against selective attrition. The interplay between selective attrition and inappropriate modeling of RTM emphasizes that it is not an easy task to assess the degree to which obtained results from empirical studies are over- versus underestimated due to attrition or RTM. Researchers should therefore use modern techniques for handling missing data and be careful to model RTM appropriately.
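The kind of simulation described can be sketched in a few lines: generate correlated baseline and follow-up data, impose attrition that depends on the follow-up value, and compare the multiple-regression estimate with and without attrition. The effect sizes and attrition rule below are illustrative, not the study's.

```python
import random

random.seed(3)

def ols2(y, x1, x2):
    """OLS slope of y on x1, controlling for x2 (2x2 normal equations)."""
    n = len(y)
    my, m1, m2 = sum(y) / n, sum(x1) / n, sum(x2) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    return (s1y * s22 - s2y * s12) / (s11 * s22 - s12 * s12)

n, beta = 20000, 0.5
x = [random.gauss(0, 1) for _ in range(n)]                      # risk factor
y0 = [random.gauss(0, 1) for _ in range(n)]                     # baseline health
y1 = [0.6 * b + beta * a + random.gauss(0, 1) for a, b in zip(x, y0)]

# Full-sample MR of follow-up on predictor, adjusting for baseline
b_full = ols2(y1, x, y0)

# Selective attrition: subjects with high follow-up values mostly drop out
kept = [i for i in range(n) if y1[i] < 1.0 or random.random() < 0.2]
b_attr = ols2([y1[i] for i in kept], [x[i] for i in kept], [y0[i] for i in kept])
```

Because retention here depends on the outcome itself, the MR slope in the retained sample is attenuated relative to the full-sample value, reproducing in miniature the underestimation-under-attrition pattern reported above.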
Estimating return period of landslide triggering by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Peres, D. J.; Cancelliere, A.
2016-10-01
Assessment of landslide hazard is a crucial step for landslide mitigation planning. Estimation of the return period of slope instability represents a quantitative method to map landslide triggering hazard on a catchment. The most common approach to estimating return periods consists of coupling a triggering threshold equation, derived from a process-based hydrological and slope stability model, with a rainfall intensity-duration-frequency (IDF) curve. Such a traditional approach generally neglects the effect of rainfall intensity variability within events, as well as the variability of initial conditions, which depend on antecedent rainfall. We propose a Monte Carlo approach for estimating the return period of shallow landslide triggering which accounts for both sources of variability. Synthetic hourly rainfall-landslide data generated by Monte Carlo simulations are analysed to compute return periods as the mean interarrival time of a factor of safety less than one. Applications are first conducted to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to evaluate the traditional IDF-based method by comparison with the Monte Carlo approach. Results show that the return period is affected significantly by the variability of both rainfall intensity within events and initial conditions, and that the traditional IDF-based approach may lead to an overestimation of the return period of landslide triggering, or, in other words, a non-conservative assessment of landslide hazard.
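The core of the Monte Carlo return-period estimate (generate many synthetic rainfall events with variable intensity, duration, and antecedent conditions; count triggerings; take the mean interarrival time) can be sketched as follows, with a generic intensity-duration threshold standing in for the process-based stability model. All parameter values are illustrative.

```python
import random

random.seed(11)

def simulate_return_period(years=2000, storms_per_year=25):
    """Return period of triggering = total simulated time / number of events
    with factor of safety below one, here approximated by an
    intensity-duration threshold I > w * a * D**(-b), where the random
    antecedent-wetness factor w lowers the threshold after wet periods.
    (Coefficients and storm statistics are made up for illustration.)"""
    a, b = 20.0, 0.6                                # threshold coefficients
    triggers = 0
    for _ in range(years * storms_per_year):
        duration = random.expovariate(1 / 6.0)      # storm duration, h
        intensity = random.expovariate(1 / 2.0)     # mean intensity, mm/h
        wetness = random.uniform(0.5, 1.0)          # antecedent conditions
        if intensity > wetness * a * max(duration, 0.1) ** (-b):
            triggers += 1
    return (years / triggers) if triggers else float("inf")

T = simulate_return_period()
```

Because both the within-event intensity and the antecedent-wetness factor vary from storm to storm, the simulated return period differs from what a single deterministic IDF-threshold intersection would give, which is the comparison the abstract draws.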
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
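The random walk in energy space can be sketched for a small 2D Ising model: moves are accepted with probability min(1, g(E)/g(E')), which flattens the energy histogram while the density-of-states estimate is built up, and the modification factor is reduced each stage. The lattice size and stage/step counts are kept deliberately small for demonstration.

```python
import math
import random

random.seed(5)
L = 4                                    # small 2D Ising lattice, J = 1
N = L * L
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def total_energy():
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return e

log_g = {}                               # running estimate of ln g(E)
f = 1.0                                  # ln of modification factor
E = total_energy()
for stage in range(8):                   # f -> f/2 each stage (short demo run)
    for _ in range(20000):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2 * spins[i][j] * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                                + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        # accept with min(1, g(E)/g(E + dE)): a random walk that flattens
        # the energy histogram while building up the density of states
        if random.random() < math.exp(min(0.0, log_g.get(E, 0.0) - log_g.get(E + dE, 0.0))):
            spins[i][j] *= -1
            E += dE
        log_g[E] = log_g.get(E, 0.0) + f
    f /= 2.0
ground_seen = (-2 * N) in log_g          # fully aligned state has E = -2N
```

In a production run the modification factor is reduced only after the histogram passes a flatness check, and ln g(E) is then used to compute canonical averages at any temperature from a single simulation.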
NASA Astrophysics Data System (ADS)
Ozaki, Hiroyuki; Kuratani, Kentaro; Sano, Hikaru; Kiyobayashi, Tetsu
2017-07-01
Simulating three transport phenomena—ionic conductivity, viscosity, and self-diffusion coefficient—in a common Monte-Carlo framework, we discuss their relationship to the intermolecular interactions of electrolyte solutions at high concentrations (C ∼ 1 mol l⁻¹). The simulation is predicated on a pseudolattice model of the solution. The ions and solvents (collectively termed "molecules") are considered dimensionless points occupying the lattice sites. The molecular transport is realized by repeatedly swapping two adjacent molecules through a stochastic Gibbs sampling process based on simple intermolecular interactions. The framework has been validated by the fact that the simulated ionic conductivity and dynamic viscosity of 1:1- and 2:1-salts qualitatively reproduce the experimental data. The magnitude of the Coulombic interaction itself is not reflected in the ionic conductivity, but the extent to which the Coulombic interaction is shielded by the dielectric constant has a significant influence. On the other hand, the dielectric constant barely influences the viscosity, while the magnitude of the Coulombic interaction is directly reflected in the viscosity.
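The elementary move of such a pseudolattice simulation (swap two adjacent molecules, accept by heat-bath/Gibbs sampling on the change in interaction energy) can be sketched as follows. The 1D lattice, ion count, and shielded contact interaction are illustrative stand-ins for the paper's model.

```python
import math
import random

random.seed(9)
L = 60
# 1:1 electrolyte on a 1D pseudolattice: +1 and -1 ions in neutral solvent (0)
lattice = [0] * L
for idx, site in enumerate(random.sample(range(L), 12)):
    lattice[site] = 1 if idx % 2 == 0 else -1

eps = 10.0                              # dielectric shielding factor (assumed)

def pair_energy(a, b):
    return (a * b) / eps                # shielded Coulomb-like contact term

def local_energy(i):
    return (pair_energy(lattice[i], lattice[(i - 1) % L])
            + pair_energy(lattice[i], lattice[(i + 1) % L]))

swaps = 0
n_steps = 50000
for _ in range(n_steps):
    i = random.randrange(L)
    j = (i + 1) % L
    # shared (i, j) bond appears in both local energies: subtract it once
    e_old = local_energy(i) + local_energy(j) - pair_energy(lattice[i], lattice[j])
    lattice[i], lattice[j] = lattice[j], lattice[i]
    e_new = local_energy(i) + local_energy(j) - pair_energy(lattice[i], lattice[j])
    # heat-bath (Gibbs) acceptance for the adjacent-pair swap
    if random.random() < 1.0 / (1.0 + math.exp(e_new - e_old)):
        swaps += 1
    else:
        lattice[i], lattice[j] = lattice[j], lattice[i]   # undo the swap
swap_rate = swaps / n_steps
```

Tracking ion displacements under a small bias field (for conductivity) or momentum transfer between layers (for viscosity) on top of this swap dynamics is how the three transport coefficients are extracted in a common framework.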
Monte Carlo simulation of amorphous selenium imaging detectors
NASA Astrophysics Data System (ADS)
Fang, Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2010-04-01
We present a Monte Carlo (MC) simulation method for studying the signal formation process in amorphous selenium (a-Se) imaging detectors for design validation and optimization of direct imaging systems. The assumptions and limitations of the proposed and previous models are examined. The PENELOPE subroutines for MC simulation of radiation transport are used to model incident x-ray photon and secondary electron interactions in the photoconductor. Our simulation model takes into account the applied electric field, atomic properties of the photoconductor material, carrier trapping by impurities, and bimolecular recombination between drifting carriers. The particle interaction cross-sections for photons and electrons are generated for Se over the energy range of medical imaging applications. Since inelastic collisions of secondary electrons lead to the creation of electron-hole pairs in the photoconductor, the electron inelastic collision stopping power of PENELOPE's Generalized Oscillator Strength model is compared with the established EEDL and NIST ESTAR databases. Sample simulated particle tracks for photons and electrons in Se are presented, along with the energy deposition map. The PENEASY general-purpose main program is extended with custom transport subroutines to take into account generation and transport of electron-hole pairs in an electromagnetic field. The charge transport routines consider trapping and recombination, and the energy required to create a detectable electron-hole pair can be estimated from simulations. This modular simulation model is designed to describe the complete image-formation process.
Shin, J; Merchant, T E; Lee, S; Li, Z; Shin, D; Farr, J B
2015-06-15
Purpose: To reconstruct phase-space information upstream of patient-specific collimators for Monte Carlo simulations using only radiotherapy planning system data. Methods: The proton energies are calculated from residual ranges, e.g., the sum of the prescribed range in the patient and the SSD. The Kapchinskij and Vladimirskij (KV) distribution was applied to sample each proton's x-y position and momentum direction, and the beam shape was assumed to be a circle. Free parameters, e.g., the initial energy spread and the emittance of the KV distribution, were estimated by benchmarking against commissioning data in a commercial treatment planning system for an operational proton therapy center. The number of histories, which defines the height of the individual pristine Bragg peaks (BPs) of a spread-out Bragg peak (SOBP), is weighted based on beam current modulation, and a correction factor is applied to take into account the fluence reduction as the residual range decreases due to the rotation of the range modulator wheel. The time-dependent behaviors, e.g., the changes of the residual range and of histories per pristine BP, are realized by utilizing TOPAS (TOol for PArticle Simulation). Results: Benchmarking simulations for selected SOBPs ranging from 7.5 cm to 15.5 cm matched measurement data in a water phantom within 2 mm in range and up to 5 mm in SOBP width. We found this model tends to underestimate entrance dose by about 5% in comparison to measurement. This was attributed to the energy distribution used in the model being limited in granularity, down to a single energy spectrum, for the narrow-angle modulator steps used in the proximal pull-back region of the SOBPs. Conclusion: Within these limitations, the source modeling method proved itself an acceptable alternative to a full treatment head simulation when the machine geometry and materials information are not available.
NASA Astrophysics Data System (ADS)
Shrestha, Suman; Vedantham, Srinivasan; Karellas, Andrew
2017-03-01
In digital breast tomosynthesis and digital mammography, the x-ray beam filter material and thickness vary between systems. Replacing K-edge filters with Al was investigated with the intent to reduce exposure duration and to simplify system design. Tungsten target x-ray spectra were simulated with K-edge filters (50 µm Rh; 50 µm Ag) and Al filters of varying thickness. Monte Carlo simulations were conducted to quantify the x-ray scatter from various filters alone, scatter-to-primary ratio (SPR) with compressed breasts, and to determine the radiation dose to the breast. These data were used to analytically compute the signal-difference-to-noise ratio (SDNR) at unit (1 mGy) mean glandular dose (MGD) for W/Rh and W/Ag spectra. At SDNR matched between K-edge and Al filtered spectra, the reductions in exposure duration and MGD were quantified for three strategies: (i) fixed Al thickness and matched tube potential in kilovolts (kV); (ii) fixed Al thickness and varying the kV to match the half-value layer (HVL) between Al and K-edge filtered spectra; and, (iii) matched kV and varying the Al thickness to match the HVL between Al and K-edge filtered spectra. Monte Carlo simulations indicate that the SPR with and without the breast were not different between Al and K-edge filters. Modelling for fixed Al thickness (700 µm) and kV matched to K-edge filtered spectra, identical SDNR was achieved with 37–57% reduction in exposure duration and with 2–20% reduction in MGD, depending on breast thickness. Modelling for fixed Al thickness (700 µm) and HVL matched by increasing the kV over (0,4) range, identical SDNR was achieved with 62–65% decrease in exposure duration and with 2–24% reduction in MGD, depending on breast thickness. For kV and HVL matched to K-edge filtered spectra by varying Al filter thickness over (700, 880) µm range, identical SDNR was achieved with 23–56% reduction in exposure duration and 2–20% reduction in MGD, depending on breast thickness
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
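The acceptance-rejection step built on a majorant kernel can be illustrated for the simplest case of equal-weight particles and the additive kernel K(u, v) = u + v, for which K_maj(u, v) = u + v_max is an upper bound computable in a single loop. This is a stripped-down sketch; the paper's differential weighting and Markov jump bookkeeping are omitted:

```python
import random

def coagulation_step(volumes, kernel=lambda u, v: u + v):
    """One acceptance-rejection coagulation event using a majorant kernel.
    For the additive kernel, K_maj(u, v) = u + v_max bounds K for every
    partner v, so the majorant rates need only one loop over particles."""
    v_max = max(volumes)
    n = len(volumes)
    majorant = [v + v_max for v in volumes]   # single O(n) loop, no pair loop
    total = sum(majorant)
    while True:
        # pick candidate i with probability proportional to its majorant rate
        r = random.random() * total
        acc = 0.0
        for i, m in enumerate(majorant):
            acc += m
            if acc >= r:
                break
        j = random.randrange(n)               # uniform candidate partner
        if j == i:
            continue
        # thinning: accept the pair with probability K(i, j) / K_maj(i, j),
        # so accepted pairs occur with the true rate K(i, j)
        if random.random() < kernel(volumes[i], volumes[j]) / majorant[i]:
            volumes[i] += volumes[j]          # coagulate: i absorbs j
            volumes.pop(j)
            return volumes
```

Because i is drawn proportionally to its majorant rate and then thinned by K/K_maj, accepted events follow the true kernel without ever evaluating all n² pair rates.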
Wu, H; Baynes, R E; Leavens, T; Tell, L A; Riviere, J E
2013-06-01
The objective of this study was to develop a population pharmacokinetic (PK) model and predict tissue residues and the withdrawal interval (WDI) of flunixin in cattle. Data were pooled from published PK studies in which flunixin was administered through various dosage regimens to diverse populations of cattle. A set of liver data used to establish the regulatory label withdrawal time (WDT) also were used in this study. Compartmental models with first-order absorption and elimination were fitted to plasma and liver concentrations by a population PK modeling approach. Monte Carlo simulations were performed with the population mean and variabilities of PK parameters to predict liver concentrations of flunixin. The PK of flunixin was described best by a 3-compartment model with an extra liver compartment. The WDI estimated in this study with liver data only was the same as the label WDT. However, a longer WDI was estimated when both plasma and liver data were included in the population PK model. This study questions the use of small groups of healthy animals to determine WDTs for drugs intended for administration to large diverse populations. This may warrant a reevaluation of the current procedure for establishing WDT to prevent violative residues of flunixin.
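The Monte Carlo step of such a population PK analysis can be sketched with a one-compartment liver model: sample between-animal variability in the PK parameters, deplete the liver concentration, and take the time at which a high percentile of the population falls below tolerance as the withdrawal interval. All numeric values below (means, variances, dose, tolerance) are illustrative placeholders, not the flunixin parameters of the study:

```python
import math
import random

def simulate_wdi(dose=2.2, tolerance=0.125, n_animals=10000, percentile=0.99):
    """Monte Carlo withdrawal-interval sketch: sample PK parameters for a
    virtual population, deplete liver concentration with a one-compartment
    model C(t) = C0 * exp(-k t), and return the time (h) at which the chosen
    percentile of the population first falls below the tolerance."""
    random.seed(1)
    times = []
    for _ in range(n_animals):
        # log-normal between-animal variability around population means
        cl = 0.5 * math.exp(random.gauss(0.0, 0.3))   # clearance, L/h/kg
        v = 1.0 * math.exp(random.gauss(0.0, 0.2))    # volume, L/kg
        k = cl / v                                    # elimination rate, 1/h
        c0 = dose / v                                 # initial concentration
        times.append(math.log(c0 / tolerance) / k)    # solve C(t) = tolerance
    times.sort()
    return times[int(percentile * n_animals)]
```

The percentile choice encodes the regulatory intent: a 99th-percentile WDI protects nearly the whole (diverse) population, which is exactly why small healthy cohorts can understate the label WDT.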
NASA Technical Reports Server (NTRS)
Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.
2003-01-01
The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" types of analysis. Currently the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.
Lou, K; Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y; Clark, J
2014-06-01
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and being imaged by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity-ranges. The accuracy and precision of these activity-ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separate from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity-range are dominated by the number (N) of coincidence events in the reconstructed image. They improve in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
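The second approach in the abstract, corrupting a noiseless activity profile with Poisson noise and re-estimating the range, is easy to reproduce. The sketch below uses an invented Gaussian depth profile and a half-maximum range definition purely to exhibit the 1/sqrt(N) scaling; none of the shapes or numbers come from the study:

```python
import math
import random

def poisson(lam):
    """Poisson sample via Knuth's multiplication method (fine for modest lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def range_precision(n_counts, trials=200):
    """Corrupt a noiseless activity depth profile with Poisson counting noise
    'trials' times, re-estimate the activity range (distal half-maximum depth)
    each time, and return the standard deviation of the estimates."""
    random.seed(2)
    depths = [i * 0.1 for i in range(200)]                         # 0-20 cm grid
    profile = [math.exp(-((d - 8.0) / 3.0) ** 2) for d in depths]  # toy shape
    norm = sum(profile)
    estimates = []
    for _ in range(trials):
        counts = [poisson(n_counts * p / norm) for p in profile]
        peak = max(counts)
        # distal-most depth where counts still reach half the peak
        idx = max(i for i, c in enumerate(counts) if c >= peak / 2)
        estimates.append(depths[idx])
    mean = sum(estimates) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)
```

Increasing the total coincidence count N tightens the spread of the range estimates roughly as 1/sqrt(N), mirroring the abstract's conclusion that count statistics dominate.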
Monte Carlo modeling and meteor showers
NASA Technical Reports Server (NTRS)
Kulikova, N. V.
1987-01-01
Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
NASA Technical Reports Server (NTRS)
Combi, Michael R.
2004-01-01
In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets, and planetary satellites, and their interactions with the surrounding particles-and-fields environments, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamic models and their direct use in data analysis. He has also worked in ground-based and space-based remote observational work and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year. The work contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma that will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the
Cluster Monte Carlo simulations of the nematic-isotropic transition
NASA Astrophysics Data System (ADS)
Priezjev, N. V.; Pelcovits, Robert A.
2001-06-01
We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.
Probabilistic Assessments of the Plate Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Ismail, A. E.; Ariffin, A. K.; Abdullah, S.; Ghazali, M. J.
2011-02-01
This paper presents the probabilistic analysis of a plate with a hole using several multiaxial high-cycle fatigue criteria (MHFC). The Dang Van, Sines, and Crossland criteria were used, and the von Mises criterion was also considered for comparison purposes. A parametric finite element model of the plate was developed, several important random variables were selected, and Latin Hypercube Sampling Monte Carlo simulation (LHS-MCS) was used as the probabilistic analysis tool. It was found that different structural reliability and sensitivity factors were obtained using different failure criteria. According to the results, multiaxial fatigue criteria are the most significant criteria to consider in assessing structural behavior, especially under complex loadings.
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
USDA-ARS?s Scientific Manuscript database
A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50 °C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
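The quantum Zeno mechanism at issue can be demonstrated with a Monte Carlo toy model far simpler than the multiple-well voltage-sensor system of the paper: a two-level system rotating coherently at a Rabi frequency, interrupted by projective measurements. The frequency, interval counts, and trial numbers below are illustrative:

```python
import math
import random

def zeno_survival(t_total=1.0, n_meas=100, omega=math.pi, trials=20000):
    """Monte Carlo estimate of the quantum Zeno effect for a two-level
    system: between projective measurements the state rotates coherently
    at Rabi frequency omega, and each measurement collapses it to state 0
    or 1. Returns the fraction of trials still in state 0 at the end."""
    random.seed(0)
    dt = t_total / n_meas
    p_flip = math.sin(omega * dt / 2.0) ** 2   # flip probability per interval
    survived = 0
    for _ in range(trials):
        state = 0
        for _ in range(n_meas):
            if random.random() < p_flip:
                state ^= 1                      # measurement finds the other state
        if state == 0:
            survived += 1
    return survived / trials
```

Frequent measurement (large n_meas) makes p_flip scale as (omega*dt/2)², so the total flip probability goes to zero and the state "freezes"; rare measurement lets it evolve away, which is the contrast Stapp's proposal relies on and which decoherence destroys.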
Efficient kinetic Monte Carlo simulation of annealing in semiconductor materials
NASA Astrophysics Data System (ADS)
Hargrove, Paul Hamilton
As the semiconductor manufacturing industry advances, the length scales of devices are shrinking rapidly, in accordance with the predictions of Moore's Law. As device dimensions shrink, predictive process modeling becomes increasingly important to the development of the production process. Of particular importance are predictive models that can be applied to process conditions not easily accessible via experiment. Therefore, models based on physical understanding are gaining importance relative to models based on empirical fits alone. One promising research area in physically based models is kinetic Monte Carlo (kMC) modeling of atomistic processes. This thesis explores kMC modeling of annealing and diffusion processes. After providing the necessary background to understand and motivate the research, a detailed review of simulation using this class of models is presented, which explains the motivation for using these models and establishes the state of the field. The author provides a user's manual for ANISRA (ANnealIng Simulation libRAry), a computer code for on-lattice kMC simulations. This library is intended as a reusable tool for the development of simulation codes for atomistic models covering a wide variety of problems. Thus care has been taken to separate the core functionality of a simulation from the specification of the model. This thesis also compares the performance of data structures for the kMC simulation problem and recommends some novel approaches. These recommendations are applicable to a wider class of model than ANISRA, and thus of potential interest even to researchers who implement their own simulators. Three example simulations are built from ANISRA and are presented to show the applicability of this class of model to problems of interest in semiconductor process modeling. The differences between the simulated models demonstrate the versatility of the code library. The small amount of code written to construct and modify these
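ANISRA itself is not publicly documented here, but the core step of any on-lattice kMC simulator of this kind is the rejection-free (BKL / n-fold way) event selection, sketched generically below; the event-rate list stands in for whatever annealing and diffusion events a model defines:

```python
import math
import random

def kmc_step(rates, t, rng=random.random):
    """One step of rejection-free kinetic Monte Carlo (BKL / n-fold way):
    pick an event with probability proportional to its rate, then advance
    the clock by an exponentially distributed waiting time with mean
    1/total_rate. Returns (chosen event index, new time)."""
    total = sum(rates)
    r = rng() * total
    acc = 0.0
    for event, rate in enumerate(rates):
        acc += rate
        if acc >= r:
            break
    t += -math.log(rng()) / total   # exponential waiting time
    return event, t
```

The linear scan shown here is O(n) per step; the data-structure comparisons the thesis describes are precisely about replacing this scan (e.g., with binary trees or binned rates) so that event selection stays fast as the event list grows.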
Monte Carlo simulation of large electron fields
Faddegon, Bruce A; Perl, Joseph; Asai, Makoto
2010-01-01
Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different “physics lists,” were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy. PMID:18296775
NASA Astrophysics Data System (ADS)
Stratis, A.; Zhang, G.; Jacobs, R.; Bogaerts, R.; Bosmans, H.
2016-12-01
In order to carry out Monte Carlo (MC) dosimetry studies, voxel phantoms modeling human anatomy, obtained by organ-based segmentation of CT image data sets, are applied to simulation frameworks. The resulting voxel phantoms preserve the patient CT acquisition geometry; in the case of head voxel models built upon head CT images, the head support with which CT scanners are equipped introduces an inclination to the head, and hence to the head voxel model. In dental cone beam CT (CBCT) imaging, patients are always positioned in such a way that the Frankfort line is horizontal, implying that there is no head inclination. The orientation of the head is important, as it influences the distance of critical radiosensitive organs like the thyroid and the esophagus from the x-ray tube. This work aims to propose a procedure to adjust head voxel phantom orientation, and to investigate the impact of head inclination on organ doses in dental CBCT MC dosimetry studies. The female adult ICRP reference phantom and three in-house-built paediatric voxel phantoms were used in this study. An EGSnrc MC framework was employed to simulate two commonly used protocols: a standard-resolution protocol on a Morita Accuitomo 170 dental CBCT scanner (FOVs: 60 × 60 mm² and 80 × 80 mm²), and a 3D Teeth protocol (FOV: 100 × 90 mm²) on a Planmeca Promax 3D MAX scanner. Result analysis revealed large absorbed organ dose differences in radiosensitive organs between the original and the geometrically corrected voxel models of this study, ranging from -45.6% to 39.3%. Therefore, accurate dental CBCT MC dose calculations require geometrical adjustments to be applied to head voxel models.
Kern, Christoph
2016-03-23
This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions) with regard to a change in SO2 column density or aerosol optical thickness can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation and quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations with either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
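The vapor-to-liquid transfer free energy in such a calculation comes from Gibbs ensemble Monte Carlo, whose distinctive move is the transfer of a molecule between two coupled boxes. The standard acceptance rule for that move is sketched below (the sign convention for the energy terms is stated in the docstring; the particular numbers in any usage are illustrative):

```python
import math

def transfer_acceptance(n_from, n_to, v_from, v_to, du_to, du_from, beta):
    """Acceptance probability for moving one molecule between the two boxes
    of a Gibbs ensemble Monte Carlo simulation:
        acc = min[1, (N_from * V_to) / ((N_to + 1) * V_from) * exp(-beta*dU)]
    where du_to is the potential-energy change of the destination box upon
    insertion and du_from that of the source box upon removal."""
    arg = (math.log((n_from * v_to) / ((n_to + 1) * v_from))
           - beta * (du_to + du_from))
    # capping the exponent at 0 is equivalent to min(1, exp(arg))
    # and avoids overflow for strongly favorable moves
    return math.exp(min(arg, 0.0))
```

Transfers like this equalize the chemical potential of HCl between the vapor and liquid boxes, which is what makes the transfer free energy directly accessible from the simulation.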
Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I
2015-05-01
This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contributions of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity to predict out-of-field doses at positions not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference of less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical directions. Testing the analytical model in clinical configurations proved the need to separate the contributions of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable, while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements, with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael
2013-05-01
In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
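The tracer-conservation property described in this abstract can be sketched with a minimal compartment model. The three compartments and first-order rate constants below are hypothetical placeholders, not the paper's fitted MAG3 parameters; the point is only that first-order inter-compartment transfer preserves the total activity between time points.

```python
# Minimal three-compartment sketch (plasma -> kidneys -> bladder) with
# first-order transfer. The rate constants are hypothetical placeholders,
# not the paper's fitted MAG3 parameters.
k_pk, k_kb = 0.05, 0.03            # per-minute transfer rates (assumed)
dt, steps = 0.1, 3000              # time step (min) and number of steps
plasma, kidney, bladder = 1.0, 0.0, 0.0
curves = []                        # time-activity curves per compartment
for _ in range(steps):
    f1 = k_pk * plasma * dt        # plasma -> kidneys
    f2 = k_kb * kidney * dt        # kidneys -> bladder
    plasma, kidney, bladder = plasma - f1, kidney + f1 - f2, bladder + f2
    curves.append((plasma, kidney, bladder))

# Transfer terms cancel pairwise, so total tracer is preserved between
# time points, as the phantom model requires.
total = plasma + kidney + bladder
```

Each tuple in `curves` plays the role of one time frame's activity map; adjusting the rate constants mimics altered uptake or transit time in the same controlled way the abstract describes.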
Brolin, Gustav; Gleisner, Katarina Sjögreen; Ljungberg, Michael
2013-05-21
In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for (99m)Tc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
NASA Astrophysics Data System (ADS)
Lamperski, S.; Płuciennik, M.
2011-01-01
The recently developed inverse grand-canonical Monte Carlo technique (IGCMC) (S. Lamperski, Molecular Simulation 33, 1193 (2007)) and the MSA theory are applied to calculate the individual activity coefficients of ions and solvent for a solvent primitive model (SPM) electrolyte. In the SPM electrolyte model the anions, cations and solvent molecules are represented by hard spheres immersed in a dielectric continuum whose permittivity is equal to that of the solvent. The ions have a point electric charge embedded at the centre. A simple 1:1 aqueous electrolyte is considered. The ions are hydrated while the water molecules form clusters modelled by hard spheres of diameter d_s. The diameter d_s depends on the dissolved salt and is determined by fitting the mean activity coefficient ln γ± calculated from IGCMC and from the MSA to the experimental data. A linear correlation is observed between d_s and the Marcus parameter ΔG_HB, which describes the ion influence on the water association.
Kinetic Monte Carlo simulation of titin unfolding
NASA Astrophysics Data System (ADS)
Makarov, Dmitrii E.; Hansma, Paul K.; Metiu, Horia
2001-06-01
Recently, it has become possible to unfold a single protein molecule, titin, by pulling it with an atomic force microscope tip. In this paper, we propose and study a stochastic kinetic model of this unfolding process. Our model assumes that each immunoglobulin domain of titin is held together by six hydrogen bonds. The external force pulls on these bonds and lowers the energy barrier that prevents a hydrogen bond from breaking; this increases the rate of bond breaking and decreases the rate of bond healing. When all six bonds are broken, the domain unfolds. Since the experiment controls the pulling rate, not the force, the latter is calculated from a wormlike chain model for the protein. In the limit of high pulling rate, this kinetic model is solved by a novel simulation method. In the limit of low pulling rate, we develop a quasiequilibrium rate theory, which is tested by simulations. The results are in agreement with the experiments: the distribution of the unfolding force and the dependence of the mean unfolding force on the pulling rate are similar to those measured. The simulations also explain why the work done by the force to break the bonds is less than the bond energy and why the breaking-force distribution varies from sample to sample. We suggest that one can synthesize polymers that are well described by our model and that they may have unusual mechanical properties.
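The bond-breaking kinetics in this abstract can be caricatured in a few lines. In the sketch below the wormlike-chain force is replaced by a linear force ramp, bond healing is neglected, and every numerical parameter is hypothetical; it only illustrates how a force-lowered barrier turns six stochastic bond ruptures into a distribution of unfolding forces.

```python
import math, random

random.seed(1)

# Caricature of the six-bond kinetic picture: each intact bond breaks with a
# force-dependent rate k0 * exp(F * dx / kT). The wormlike-chain force of the
# paper is replaced by a linear force ramp, bond healing is neglected, and
# every numerical parameter below is hypothetical.
k0 = 1e-4            # zero-force breaking rate, 1/ms (assumed)
dx_over_kT = 0.3     # barrier sensitivity to force, 1/pN (assumed)
loading_rate = 1.0   # force ramp, pN/ms (assumed)
dt = 0.01            # time step, ms

def unfolding_force():
    t, bonds = 0.0, 6
    while bonds > 0:
        force = loading_rate * t
        k = k0 * math.exp(dx_over_kT * force)   # per-bond breaking rate
        for _ in range(bonds):
            if random.random() < k * dt:        # this bond breaks now
                bonds -= 1
        t += dt
    return loading_rate * t                     # force when the domain unfolds

forces = [unfolding_force() for _ in range(200)]
mean_force = sum(forces) / len(forces)
```

Raising `loading_rate` shifts the sampled distribution toward higher forces, which matches the qualitative pulling-rate dependence the paper reports.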
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated the reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. As high-performance computing continues to fall in price and become more accessible, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
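The inverse power model mentioned above can be recovered with a log-log least-squares fit. Only the single-node (53 min) and 20-node (3.11 min) runtimes come from the abstract; the intermediate data points are hypothetical fillers for illustration.

```python
import math

# Log-log least-squares fit of T(N) = a * N**-b to runtime-vs-nodes data.
# Only the 1-node (53 min) and 20-node (3.11 min) figures come from the
# abstract; the intermediate points are hypothetical fillers.
data = [(1, 53.0), (5, 11.5), (10, 6.0), (20, 3.11)]   # (nodes, minutes)
xs = [math.log(n) for n, _ in data]
ys = [math.log(t) for _, t in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
b, a = -slope, math.exp(my - slope * mx)   # T(N) ~ a * N**-b
```

A fitted exponent `b` near 1 would indicate close-to-ideal strong scaling over this range; values below 1 reflect the per-job overhead of splitting, uploading, and aggregating the data.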
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-01-01
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
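The slower-than-O(1/N) convergence caused by intergenerational correlation can be reproduced with a toy surrogate: per-generation tally means following an AR(1) chain. The correlation coefficient and generation counts below are illustrative, not reactor-derived.

```python
import math, random, statistics

random.seed(7)

# Toy surrogate for intergenerational correlation: per-generation tally means
# follow an AR(1) chain with correlation rho, standing in for the spatially
# correlated fission source. rho and the generation counts are illustrative.
rho = 0.95
sigma = math.sqrt(1.0 - rho * rho)

def mean_of_chain(n_gens):
    x, total = random.gauss(0.0, 1.0), 0.0   # start from the stationary distribution
    for _ in range(n_gens):
        x = rho * x + sigma * random.gauss(0.0, 1.0)
        total += x
    return total / n_gens

def var_of_mean(n_gens, trials=4000):
    return statistics.pvariance([mean_of_chain(n_gens) for _ in range(trials)])

# For uncorrelated generations this ratio would be ~4; correlation makes the
# variance of the mean shrink more slowly than O(1/N), as the paper notes.
ratio = var_of_mean(100) / var_of_mean(400)
```

Because the chains are independent across trials, averaging several independent chains recovers the full 1/M reduction, which is the intuition behind the ensemble-of-simulations strategy discussed above.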
Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor
2015-03-01
Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level is only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of the data is missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model, trained and tested on the same input variables and evaluated with multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis showed that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
Monte Carlo simulation of proton track structure in biological matter
NASA Astrophysics Data System (ADS)
Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.; Fojón, Omar A.; Hanssen, Jocelyn; Rivarola, Roberto D.; Senot, Philippe; Champion, Christophe
2017-05-01
Understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections, as well as on water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V, devoted to modeling the slowing-down of 10 keV-100 MeV protons in both water and DNA, in which the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections. Contribution to the Topical Issue "Many Particle Spectroscopy of Atoms, Molecules, Clusters and Surfaces", edited by A.N. Grum-Grzhimailo, E.V. Gryzlova, Yu V. Popov, and A.V. Solov'yov.
NASA Astrophysics Data System (ADS)
Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.
2016-12-01
Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations when modeling tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as consideration of quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of considered In0.53Ga0.47As FinFETs over otherwise identical
Monte Carlo simulation of laser beam scattering by water droplets
NASA Astrophysics Data System (ADS)
Wang, Biao; Tong, Guang-de; Lin, Jia-xuan
2013-09-01
A Monte Carlo simulation of laser beam scattering by discrete water droplets is presented, and the temporal profile of the LIDAR signal scattered from randomly distributed water droplets, such as raindrops and fog, is acquired. A photon source model is developed in the simulation for laser beams of arbitrary intensity distribution. Mie theory and the geometrical optics approximation are used to calculate optical parameters, such as the scattering coefficient, albedo and average asymmetry factor, for water droplets of variable size with a gamma distribution. The scattering angle is calculated using the probability distribution given by the Henyey-Greenstein phase function. The model, which solves the semi-infinite homogeneous-medium problem, is capable of handling a variety of geometries and arbitrary spatio-temporal pulse profiles.
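The Henyey-Greenstein sampling step mentioned above has a closed-form inverse CDF, sketched here. The asymmetry factor g = 0.85 is an illustrative forward-scattering value, not one taken from the paper's Mie calculations.

```python
import math, random

random.seed(0)

# Closed-form inverse-CDF sampling of the scattering-angle cosine from the
# Henyey-Greenstein phase function. g = 0.85 is an illustrative
# forward-scattering value, not one from the paper's Mie calculations.
def sample_cos_theta(g):
    xi = random.random()
    if abs(g) < 1e-6:                       # isotropic limit
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

g = 0.85
samples = [sample_cos_theta(g) for _ in range(100_000)]
mean_cos = sum(samples) / len(samples)      # the HG mean cosine equals g
```

The sample mean of cos θ converging to g is a convenient self-check: the HG asymmetry factor is by definition the average cosine of the scattering angle.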
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
On the time scale associated with Monte Carlo simulations.
Bal, Kristof M; Neyts, Erik C
2014-11-28
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations, allowing, for example, the study of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid-state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement for or complement to molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales with MC simulations in general.
James Webb Space Telescope (JWST) Stationkeeping Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra; Yu, Wayne
2014-01-01
The James Webb Space Telescope (JWST) will launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 11 years. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
On the time scale associated with Monte Carlo simulations
Bal, Kristof M.; Neyts, Erik C.
2014-11-28
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations, allowing, for example, the study of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid-state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement for or complement to molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales with MC simulations in general.
Deterministic sensitivity analysis for first-order Monte Carlo simulations: a technical note.
Geisler, Benjamin P; Siebert, Uwe; Gazelle, G Scott; Cohen, David J; Göhler, Alexander
2009-01-01
Monte Carlo microsimulations have gained increasing popularity in decision-analytic modeling because they can incorporate discrete events. Although deterministic sensitivity analyses are essential for the interpretation of results, it remains difficult to combine them with Monte Carlo simulations in standard modeling packages without enormous time investment. Our purpose was to facilitate one-way deterministic sensitivity analysis of TreeAge Markov state-transition models requiring first-order Monte Carlo simulations. Using TreeAge Pro Suite 2007 and Microsoft Visual Basic for EXCEL, we constructed a generic script that enables one to perform automated deterministic one-way sensitivity analyses in EXCEL employing microsimulation models. In addition, we constructed a generic EXCEL worksheet that allows use of the script with little programming knowledge. Linking TreeAge Pro Suite 2007 and Visual Basic enables deterministic sensitivity analyses of first-order Monte Carlo simulations. There are other potentially interesting applications for automated analysis.
Papadimitroulas, P; Kostou, T; Kagadis, G; Loudos, G
2015-06-15
Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning, respectively. The resolution of the phantom was set to 3.5 mm³. The impact of the motion on spatial resolution was quantified using a sphere of 3.5 mm diameter and 10 separate time frames, in the modeled ECAM SPECT scanner. Finally, the impact of respiratory motion on resolution and on imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: FWHM was extracted for a static and a moving sphere located ∼23 cm from the entrance of the SPECT head. The difference in FWHM was 20% between the two simulations. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was included in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a displacement of ∼2.5 cm. Conclusion: As we move to individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like motion correction
Choi, M.; Chan, V. S.; Lao, L. L.; Pinsker, R. I.; Green, D.; Berry, L. A.; Jaeger, F.; Park, J. M.; Heidbrink, W. W.; Liu, D.; Podesta, M.; Harvey, R.; Smithe, D. N.; Bonoli, P.
2010-05-15
The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and the future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast-ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high-harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments.
Choi, M.; Green, David L; Heidbrink, W. W.; Harvey, R. W.; Liu, D.; Chan, V. S.; Berry, Lee A; Jaeger, Erwin Frederick; Lao, L.L.; Pinsker, R. I.; Podesta, M.; Smithe, D. N.; Park, J. M.; Bonoli, P.
2010-01-01
The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and the future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast-ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high-harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments. (C) 2010 American Institute of Physics. [doi:10.1063/1.3314336]
The t-J model of hard-core bosons in slave-particle representation and its Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Nakano, Yuki; Ishima, Takumi; Kobayashi, Naohiro; Sakakibara, Kazuhiko; Ichinose, Ikuo; Matsui, Tetsuo
2012-12-01
We study a system of hard-core bosons (HCB) with two species on the three-dimensional lattice at finite temperatures. In the strong-correlation limit, the system becomes the bosonic t-J model, that is, the t-J model of “bosonic electrons”. The bosonic “electron” operator Bxσ at the site x with a two-component spin σ (= 1, 2) is treated as a HCB operator, and represented by a composite of two slave particles: a spinon described by a Schwinger boson (CP1 boson) zxσ and a holon described by a HCB field φx, as Bxσ = φ†x zxσ. This φx is in turn represented by another CP1 quasi-spinon operator ωxa (a = 1, 2). The phase diagrams of the resulting double-CP1 system obtained by Monte Carlo simulations involve first-order and second-order phase boundaries. We present in detail the techniques and algorithm used to reduce hysteresis and locate the first-order transition points.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy
Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.
2015-01-01
Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapy with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve a probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
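A PTA calculation of the kind described above can be sketched as follows: per-subject clearance and volume are drawn from log-normal distributions, the steady-state concentration profile over one dosing interval is evaluated in closed form, and PTA is the fraction of subjects meeting the fT>MIC target. All PK parameters, the MIC, the dosing, and the 60% target here are illustrative assumptions, not the study's institutional data.

```python
import math, random

random.seed(42)

# Hypothetical prolonged-infusion beta-lactam PTA sketch. CL/V distributions,
# the MIC, the dose, and the fT>MIC target are all assumed for illustration.
dose, tau, t_inf = 2000.0, 8.0, 3.0   # mg, dosing interval (h), infusion (h)
mic, target = 8.0, 0.60               # mg/L and required fraction of tau

def fraction_above_mic(cl, v, points=50):
    ke, rate = cl / v, dose / t_inf
    e_in = math.exp(-ke * t_inf)
    # steady-state peak (end of infusion) and trough concentrations
    cmax = (rate / cl) * (1 - e_in) / (1 - math.exp(-ke * tau))
    cmin = cmax * math.exp(-ke * (tau - t_inf))
    above = 0
    for i in range(points):
        t = tau * (i + 0.5) / points
        if t <= t_inf:   # during infusion, on top of the residual trough
            c = (rate / cl) * (1 - math.exp(-ke * t)) + cmin * math.exp(-ke * t)
        else:            # first-order decay after the infusion ends
            c = cmax * math.exp(-ke * (t - t_inf))
        above += c > mic
    return above / points

n = 10_000
pta = sum(
    fraction_above_mic(random.lognormvariate(math.log(8.0), 0.3),   # CL, L/h
                       random.lognormvariate(math.log(18.0), 0.2))  # V, L
    >= target
    for _ in range(n)
) / n
```

In a real analysis the MIC would itself be sampled from the institutional isolate distribution and the PTA reported per regimen, as the study does.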
Determining MTF of digital detector system with Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee
2005-04-01
We have designed a detector based on a-Se (amorphous selenium) and simulated it with the Monte Carlo method. We apply cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we simulated a 139 um pixel pitch and used a simulated X-ray tube spectrum.
Monte Carlo simulation of the terrestrial hydrogen exosphere
Hodges, R.R. Jr.
1994-12-01
Methods for Monte Carlo simulation of planetary exospheres have evolved from early work on the lunar atmosphere, where the regolith surface provides a well defined exobase. A major limitation of the successor simulations of the exospheres of Earth and Venus is the use of an exobase surface as an artifice to separate the collisional processes of the thermosphere from a collisionless exosphere. In this paper a new generalized approach to exosphere simulation is described, wherein the exobase is replaced by a barometric depletion of the major constituents of the thermosphere. Exospheric atoms in the thermosphere-exosphere transition region, and in the outer exosphere as well, travel in ballistic trajectories that are interrupted by collisions with the background gas, and by charge exchange interactions with ionospheric particles. The modified simulator has been applied to the terrestrial hydrogen exosphere problem, using velocity dependent differential cross sections to provide statistically correct collisional scattering in H-O and H-H(+) interactions. Global models are presented for both solstice and equinox over the effective solar cycle range of the F10.7 index (80 to 230). Simulation results show significant differences with previous terrestrial exosphere models, as well as with the H distributions of the MSIS-86 thermosphere model.
A study on tetrahedron-based inhomogeneous Monte Carlo optical simulation.
Shen, Haiou; Wang, Ge
2010-12-03
Monte Carlo (MC) simulation is widely recognized as a gold standard in biophotonics for its high accuracy. Here we analyze several issues associated with tetrahedron-based optical Monte Carlo simulation in the context of TIM-OS, MMCM, MCML, and CUDAMCML in terms of accuracy and efficiency. Our results show that TIM-OS has significantly better performance in the complex geometry cases and comparable performance with CUDAMCML in the multi-layered tissue model.
Morton, April M; Piburn, Jesse O; McManamay, Ryan A; Nagle, Nicholas N; Stewart, Robert N
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
ERIC Educational Resources Information Center
Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O.
2007-01-01
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…
NASA Astrophysics Data System (ADS)
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulations were performed on three Schiff bases, namely 4-(4-bromophenyl)-N‧-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N‧-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC), and 4-(4-bromophenyl)-N‧-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity, such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA), and nucleophilicity (ω), were computed and discussed. Monte Carlo simulations were applied to search for the most stable configurations and adsorption energies for the interaction of the inhibitors with the Fe(110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to particle transport problems. Most Monte Carlo simulation tools are static: they can only perform simulations for problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplicity, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained, rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 run and to analyze and plot the results afterwards. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a patient with prostate cancer treated using proton therapy.
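The CT-to-voxel conversion step described above can be sketched in a few lines. The Hounsfield-unit thresholds and density ramp below are generic illustrative values (the cited work used MATLAB and its own calibration), and the CT array here is randomly generated rather than read from real DICOM files.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a stack of CT slices in Hounsfield units (HU):
ct = rng.integers(-1000, 1500, size=(4, 64, 64))

# Hypothetical HU segmentation thresholds (illustrative only):
# air < -900 < lung < -200 < soft tissue < 300 < bone
bins = np.array([-900, -200, 300])
material = np.digitize(ct, bins)  # 0=air, 1=lung, 2=soft tissue, 3=bone

# A simple piecewise-linear HU-to-density ramp (g/cm^3), again illustrative;
# clipped below at roughly the density of air.
density = np.clip(1.0 + ct / 1000.0, 0.0012, None)

print(material.shape, int(material.min()), int(material.max()))
```

Each voxel then carries a material index and mass density, which is the form a Geant4 parameterized/nested volume expects for a patient phantom.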
Residual entropy of ices and clathrates from Monte Carlo simulation
Kolafa, Jiří
2014-05-28
We calculated the residual entropy of ices (Ih, Ic, III, V, VI) and clathrates (I, II, H), assuming the same energy of all configurations satisfying the Bernal–Fowler ice rules. The Metropolis Monte Carlo simulations in the range of temperatures from infinity to a size-dependent threshold were followed by the thermodynamic integration. Convergence of the simulation and the finite-size effects were analyzed using the quasichemical approximation and the Debye–Hückel theory applied to the Bjerrum defects. The leading finite-size error terms, ln N/N, 1/N, and for the two-dimensional square ice model also 1/N^(3/2), were used for an extrapolation to the thermodynamic limit. Finally, we discuss the influence of unequal energies of proton configurations.
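For lattices small enough to enumerate, the counting that the Monte Carlo plus thermodynamic-integration machinery above approximates can be done exactly. The sketch below brute-forces the two-dimensional square-ice (six-vertex) model on a tiny periodic lattice as a toy check against Pauling's (3/2)^N estimate; it is not the paper's method, which exists precisely because enumeration is infeasible for realistic system sizes.

```python
import itertools
import math

L = 2  # tiny periodic square-ice lattice; 2*L*L directed edges in total
# h[i][j]: arrow on the horizontal edge from (i,j) to (i,(j+1)%L); True = rightward
# v[i][j]: arrow on the vertical edge from (i,j) to ((i+1)%L,j); True = downward

def ice_rule_count(L):
    n_edges = 2 * L * L
    count = 0
    for bits in itertools.product((False, True), repeat=n_edges):
        h = [[bits[i * L + j] for j in range(L)] for i in range(L)]
        v = [[bits[L * L + i * L + j] for j in range(L)] for i in range(L)]
        ok = True
        for i in range(L):
            for j in range(L):
                # count arrows pointing INTO vertex (i,j); ice rule: exactly 2
                inward = (h[i][(j - 1) % L]      # rightward edge arriving from the left
                          + (not h[i][j])        # leftward edge arriving from the right
                          + v[(i - 1) % L][j]    # downward edge arriving from above
                          + (not v[i][j]))       # upward edge arriving from below
                if inward != 2:
                    ok = False
                    break
            if not ok:
                break
        count += ok
    return count

W = ice_rule_count(L)        # number of ice-rule configurations
s = math.log(W) / (L * L)    # residual entropy per vertex (k_B = 1)
print(W, s, math.log(1.5))   # Pauling's mean-field estimate is ln(3/2) ~ 0.405
```

Already at this size the exact per-vertex entropy exceeds the Pauling value, consistent with the paper's observation that loop corrections raise the residual entropy above the mean-field (no-loop) approximation.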
Monte Carlo simulations of nanoscale focused neon ion beam sputtering.
Timilsina, Rajendra; Rack, Philip D
2013-12-13
A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed.
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2016-01-01
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Monte Carlo simulations of systems with complex energy landscapes
NASA Astrophysics Data System (ADS)
Wüst, T.; Landau, D. P.; Gervais, C.; Xu, Y.
2009-04-01
Non-traditional Monte Carlo simulations are a powerful approach to the study of systems with complex energy landscapes. After reviewing several of these specialized algorithms we shall describe the behavior of typical systems including spin glasses, lattice proteins, and models for "real" proteins. In the Edwards-Anderson spin glass it is now possible to produce probability distributions in the canonical ensemble and thermodynamic results of high numerical quality. In the hydrophobic-polar (HP) lattice protein model Wang-Landau sampling with an improved move set (pull-moves) produces results of very high quality. These can be compared with the results of other methods of statistical physics. A more realistic membrane protein model for Glycophorin A is also examined. Wang-Landau sampling allows the study of the dimerization process including an elucidation of the nature of the process.
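Wang-Landau sampling, mentioned above, estimates the density of states g(E) directly via a flat-histogram random walk. A minimal sketch on a toy system where g(E) is known exactly (the total of two dice) shows the update rule; the move set, 80% flatness criterion, and stopping threshold are common illustrative choices, not those of the cited work.

```python
import math
import random

random.seed(1)

# Wang-Landau estimate of the density of states g(E) for a toy "system":
# the energy is the total E of two six-sided dice (exact g: 1,2,3,4,5,6,5,4,3,2,1).
energies = list(range(2, 13))
ln_g = {E: 0.0 for E in energies}  # running estimate of ln g(E)
hist = {E: 0 for E in energies}    # visit histogram for the flatness check

state = (1, 1)
f = 1.0  # modification factor, halved whenever the histogram is flat
while f > 1e-6:
    for _ in range(5000):
        d1, d2 = state
        # propose re-rolling one die (a simple ergodic move set)
        if random.random() < 0.5:
            new = (random.randint(1, 6), d2)
        else:
            new = (d1, random.randint(1, 6))
        E_old, E_new = d1 + d2, new[0] + new[1]
        # accept with probability min(1, g(E_old)/g(E_new))
        delta = ln_g[E_old] - ln_g[E_new]
        if delta >= 0 or random.random() < math.exp(delta):
            state = new
        E = state[0] + state[1]
        ln_g[E] += f
        hist[E] += 1
    # flatness check: every bin within 80% of the mean visit count
    mean = sum(hist.values()) / len(hist)
    if min(hist.values()) > 0.8 * mean:
        f /= 2.0
        hist = {E: 0 for E in energies}

# normalise so that g(2) = 1 (its exact value) and inspect the shape
g = {E: math.exp(ln_g[E] - ln_g[2]) for E in energies}
print({E: round(g[E], 2) for E in energies})  # exact: 1,2,3,4,5,6,5,4,3,2,1
```

Once ln g(E) is known, canonical averages at any temperature follow by reweighting, which is what makes the method attractive for rough energy landscapes such as the lattice protein models discussed above.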
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Díez, A; Largo, J; Solana, J R
2006-08-21
Computer simulations have been performed for fluids with van der Waals potentials, that is, hard spheres with attractive inverse-power tails, to determine the equation of state and the excess energy. In addition, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
NASA Astrophysics Data System (ADS)
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits the vessel, ZPP fluorescence will be 10-200 times higher than background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; in this case the fluorescence from the layers below does not contribute to the signal. Therefore, the prospect of building a device to detect ZPP fluorescence in the retina looks very promising.
Multicanonical Monte Carlo for Simulation of Optical Links
NASA Astrophysics Data System (ADS)
Bononi, Alberto; Rusch, Leslie A.
Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for the estimation of the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method to efficiently simulate events occurring with probabilities smaller than ~10^-6, such as bit error rate (BER) and system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10^-3; these systems are well served by traditional Monte Carlo error counting. MMC and IS are, nonetheless, fundamental tools both to understand the statistics of the decision variable (as well as of any physical parameter of interest) and to validate any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10^-6, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution to estimate outages.
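MMC itself iteratively adapts its bias toward the rare region, which takes more machinery than fits here; the closely related importance-sampling idea the abstract compares it to can be shown in a few lines. The sketch below estimates a Gaussian tail probability of order 10^-7 by exponential tilting; the threshold, tilt, and sample count are arbitrary illustrative choices.

```python
from math import erfc, sqrt

import numpy as np

rng = np.random.default_rng(42)
N = 100_000
t = 5.0  # rare-event threshold: estimate P(Z > 5) for Z ~ N(0,1)

# Naive Monte Carlo is essentially hopeless here: P ~ 2.9e-7, so ~1e9
# samples would be needed for a handful of hits.
naive = float(np.mean(rng.standard_normal(N) > t))

# Importance sampling with an exponential tilt: draw from N(t, 1) instead,
# and reweight each sample by the likelihood ratio phi(x) / phi(x - t),
# which works out to exp(-t*x + t^2/2).
x = rng.standard_normal(N) + t
weights = np.exp(-t * x + 0.5 * t * t)
is_est = float(np.mean(weights * (x > t)))

exact = 0.5 * erfc(t / sqrt(2.0))
print(f"naive={naive:.2e}  IS={is_est:.3e}  exact={exact:.3e}")
```

With the sampling mass moved onto the tail, the relative error of the estimate drops to the percent level at this sample size, whereas the naive counter typically records zero events.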
Leblanc, M D; Whitehead, J P; Plumer, M L
2013-05-15
A combination of Metropolis and modified Wolff cluster algorithms is used to examine the impact of uniaxial single-ion anisotropy on the phase transition to ferromagnetic order of Heisenberg macrospins on a 2D square lattice. This forms the basis of a model for granular perpendicular recording media where macrospins represent the magnetic moment of grains. The focus of this work is on the interplay between anisotropy D, intragrain exchange J' and intergrain exchange J on the ordering temperature TC, and extends our previously reported analysis of the granular Ising model. The role of intragrain degrees of freedom in heat assisted magnetic recording is discussed.
Monte Carlo Simulations for Mine Detection
Toor, A.; Marchetti, A.A.
2000-03-14
the system worked extremely well on all classes of anti-tank mines, the Russian hardware components were inferior to those that are commercially available in the United States, i.e. the NaI(Tl) crystals had significantly higher background levels and poorer resolution than their U.S. counterparts, the electronics appeared to be decades old and the photomultiplier tubes were noisy and lacked gain stabilization circuitry. During the evaluation of this technology, the question that came to mind was: could state-of-the-art sensors and electronics and improved software algorithms lead to a neutron based system that could reliably detect much smaller buried mines; namely antipersonnel mines containing 30-40 grams of high explosive? Our goal in this study was to conduct Monte Carlo simulations to gain better understanding of both phases of the mine detection system and to develop an understanding for the system's overall capabilities and limitations. In addition, we examined possible extensions of this technology to see whether or not state-of-the-art improvements could lead to a reliable anti-personnel mine detection system.
NASA Astrophysics Data System (ADS)
Böhm, Michael C.; Schulte, Joachim; Utrera, Luis
Feynman path-integral quantum Monte Carlo (QMC) simulations and an analytic many-body approach are used to study the ground state properties of one-dimensional (1D) chains in the theoretical framework of model Hamiltonians of the Hubbard type. The QMC algorithm is employed to derive position-space quantities, while band structure properties are evaluated by combining QMC data with expressions derived in momentum (k) space. The bridging link between the two representations is the quasi-chemical approximation (QCA). Electronic charge fluctuations ⟨(Δn_i)²⟩ and the fluctuations of the magnetic local moments ⟨(Δs_i)²⟩ are studied as a function of the on-site density
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
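The speedup reported above comes from folding candidate input spectra with a precomputed response function rather than re-running the transport simulation for each one. A toy sketch of that precompute-then-multiply pattern, with a fake Gaussian-smearing "detector" standing in for a real Monte Carlo transport code:

```python
import numpy as np

rng = np.random.default_rng(7)
n_in, n_out = 50, 64  # incident-energy bins and detector (pulse-height) bins

def simulate_column(i, n_events=10_000):
    """One-time Monte Carlo cost: response of the detector to incident bin i.

    Stand-in for a full detector simulation: Gaussian smearing with
    (implicitly) imperfect containment at the spectrum edges.
    """
    deposits = rng.normal(loc=i * n_out / n_in, scale=2.0, size=n_events)
    col, _ = np.histogram(deposits, bins=n_out, range=(0, n_out))
    return col / n_events

# Response matrix: column i is the normalized output for incident bin i.
R = np.column_stack([simulate_column(i) for i in range(n_in)])  # (n_out, n_in)

# Per-spectrum cost afterwards is a single matrix-vector product, so trying
# thousands of candidate input spectra needs no new transport simulations.
true_spectrum = np.exp(-np.arange(n_in) / 10.0)  # hypothetical source spectrum
measured = R @ true_spectrum
print(measured.shape)
```

As the abstract stresses, the statistical uncertainty of the precomputed columns propagates into every folded spectrum, so the column statistics must be tracked alongside R rather than ignored.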
Chan, C H; Rikvold, P A
2015-01-01
The Ziff-Gulari-Barshad (ZGB) model, a simplified description of the oxidation of carbon monoxide (CO) on a catalyst surface, is widely used to study properties of nonequilibrium phase transitions. In particular, it exhibits a nonequilibrium, discontinuous transition between a reactive and a CO poisoned phase. If one allows a nonzero rate of CO desorption (k), the line of phase transitions terminates at a critical point (kc). In this work, instead of restricting the CO and atomic oxygen (O) to react to form carbon dioxide (CO2) only when they are adsorbed in close proximity, we consider a modified model that includes an adjustable probability for adsorbed CO and O atoms located far apart on the lattice to react. We employ large-scale Monte Carlo simulations for system sizes up to 240×240 lattice sites, using the crossing of fourth-order cumulants to study the critical properties of this system. We find that the nonequilibrium critical point changes from the two-dimensional Ising universality class to the mean-field universality class upon introducing even a weak long-range reactivity mechanism. This conclusion is supported by measurements of cumulant fixed-point values, cluster percolation probabilities, correlation-length finite-size scaling properties, and the critical exponent ratio β/ν. The observed behavior is consistent with that of the equilibrium Ising ferromagnet with additional weak long-range interactions [T. Nakada, P. A. Rikvold, T. Mori, M. Nishino, and S. Miyashita, Phys. Rev. B 84, 054433 (2011)]. The large system sizes and the use of fourth-order cumulants also enable determination with improved accuracy of the critical point of the original ZGB model with CO desorption.
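The fourth-order cumulant (Binder cumulant) technique used above is generic. A minimal sketch on the ferromagnetic 2D Ising model (a stand-in, not the ZGB reaction model) shows how U4 = 1 - ⟨m⁴⟩/(3⟨m²⟩²) separates ordered from disordered phases; the lattice size, temperatures, and sweep counts below are illustrative.

```python
import math
import random

random.seed(3)
L = 8  # small 2D Ising lattice with periodic boundaries (J = 1, k_B = 1)

def binder_cumulant(T, sweeps=4000, therm=1000, start_up=True):
    s = [[1 if start_up else random.choice((-1, 1)) for _ in range(L)]
         for _ in range(L)]
    m2_sum = m4_sum = 0.0
    n_meas = 0
    for sweep in range(sweeps):
        for _ in range(L * L):  # one Metropolis sweep = L*L attempted flips
            i, j = random.randrange(L), random.randrange(L)
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nn
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        if sweep >= therm:  # measure after thermalization
            m = sum(map(sum, s)) / (L * L)
            m2_sum += m * m
            m4_sum += m ** 4
            n_meas += 1
    m2, m4 = m2_sum / n_meas, m4_sum / n_meas
    return 1.0 - m4 / (3.0 * m2 * m2)  # fourth-order (Binder) cumulant

u_cold = binder_cumulant(1.5, start_up=True)   # ordered phase: U4 -> 2/3
u_hot = binder_cumulant(5.0, start_up=False)   # disordered phase: U4 -> 0
print(round(u_cold, 3), round(u_hot, 3))
```

Plotting U4 against the control parameter for several lattice sizes and locating the common crossing point is what pins down the critical point with the improved accuracy the abstract describes.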
Lodise, Thomas P; Kinzig-Schippers, Martina; Drusano, George L; Loos, Ulrich; Vogel, Friedrich; Bulitta, Jürgen; Hinder, Markus; Sörgel, Fritz
2008-06-01
Cefditoren is a broad-spectrum, oral cephalosporin that is highly active against clinically relevant respiratory tract pathogens, including multidrug-resistant Streptococcus pneumoniae. This study described its pharmacodynamic profile in plasma and epithelial lining fluid (ELF). Plasma and ELF pharmacokinetic data were obtained from 24 patients under fasting conditions. Cefditoren and urea concentrations were determined in plasma and bronchoalveolar lavage fluid by liquid chromatography-tandem mass spectrometry. Concentration-time profiles in plasma and ELF were modeled using a model with three disposition compartments and first-order absorption, elimination, and transfer. Pharmacokinetic parameters were identified in a population pharmacokinetic analysis (big nonparametric adaptive grid with adaptive gamma). Monte Carlo simulation (9,999 subjects) was performed with the ADAPT II program to estimate the probability of target attainment, i.e., the probability that the free cefditoren plasma concentration (88% protein binding) and the total ELF concentration exceeded the MIC for 33% of the dosing interval, for 400 mg cefditoren given orally every 12 h. After the Bayesian step, the overall fits of the model to the data were good, and plots of predicted versus observed concentrations for plasma and ELF showed slopes and intercepts very close to the ideal values of 1.0 and 0.0, respectively. In the plasma probability of target attainment analysis, the probability of achieving a time for which the free (unbound) plasma concentration exceeds the MIC of the organism for 33% of the dosing interval was <80% for a MIC of >0.06 mg/liter. Similarly, the probability of achieving a time above the MIC of 33% was <80% for a MIC of >0.06 mg/liter in ELF. Cefditoren was found to have a low probability of achieving a bacteriostatic effect against MICs of >0.06 mg/liter, which includes most S. pneumoniae isolates with intermediate susceptibility to penicillin, when given in the fasting state in both
A Monte Carlo investigation of the Hamiltonian mean field model
NASA Astrophysics Data System (ADS)
Pluchino, Alessandro; Andronico, Giuseppe; Rapisarda, Andrea
2005-04-01
We present a Monte Carlo numerical investigation of the Hamiltonian mean field (HMF) model. We begin by discussing canonical Metropolis Monte Carlo calculations, in order to check the caloric curve of the HMF model and study finite size effects. In the second part of the paper, we present numerical simulations obtained by means of a modified Monte Carlo procedure, with the aim of testing the stability of those states at minimum temperature and zero magnetization (homogeneous quasi-stationary states) which exist in the condensed phase of the model just below the critical point. For energy densities smaller than the limiting value U∼0.68, we find that these states are unstable, confirming a recent result on the Vlasov stability analysis applied to the HMF model.
Monte Carlo simulation and dosimetric verification of radiotherapy beam modifiers
NASA Astrophysics Data System (ADS)
Spezi, E.; Lewis, D. G.; Smith, C. W.
2001-11-01
Monte Carlo simulation of beam modifiers such as physical wedges and compensating filters has been performed with a rectilinear voxel geometry module. A modified version of the EGS4/DOSXYZ code has been developed for this purpose. The new implementations have been validated against the BEAM Monte Carlo code using its standard component modules (CMs) in several geometrical conditions. No significant disagreements were found within the statistical errors of 0.5% for photons and 2% for electrons. The clinical applicability and flexibility of the new version of the code has been assessed through an extensive verification versus dosimetric data. Both Varian multi-leaf collimator (MLC) wedges and standard wedges have been simulated and compared against experiments for 6 MV photon beams and different field sizes. Good agreement was found between calculated and measured depth doses and lateral dose profiles along both wedged and unwedged directions for different depths and focus-to-surface distances. Furthermore, Monte Carlo-generated output factors for both open and wedged fields agreed with linac commissioning beam data within statistical uncertainties of the calculations (<3% at largest depths). Compensating filters of both low-density and high-density materials have also been successfully simulated. As a demonstration, a wax compensating filter with a complex three-dimensional concave and convex geometry has been modelled through a CT scan import. Calculated depth doses and lateral dose profiles for different field sizes agreed well with experiments. The code was used to investigate the performance of a commercial treatment planning system in designing compensators. Dose distributions in a heterogeneous water phantom emulating the head and neck region were calculated with the convolution-superposition method (pencil beam and collapsed cone implementations) and compared against those from the MC code developed herein. The new technique presented in this work is
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
Monte Carlo Simulation of Response Time for Velocity Modulation Transistors
NASA Astrophysics Data System (ADS)
Maezawa, Koichi; Mizutani, Takashi; Tomizawa, Masaaki
1992-03-01
We have studied the response time for velocity modulation transistors (VMTs) using particle Monte Carlo simulation. The intrinsic VMT model with zero gate-source spacing was used to avoid the change in total number of electrons due to the difference in source resistances between the two channels. The results show that the response time for VMTs is about half that for ordinary high electron mobility transistors (HEMTs). The remaining factor limiting the response time is the electron redistribution in the channel, which is shown to be caused by the difference in velocity-electric field characteristics in the two channels. A “virtual” VMT model with a single channel, where the impurity concentration is changed abruptly at a certain moment, has also been studied to clarify the effect of electron redistribution.
Residual entropy of ice III from Monte Carlo simulation.
Kolafa, Jiří
2016-03-28
We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β assuming equal energies of all configurations. To do this, a discrete ice model with a Bjerrum defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes, followed by thermodynamic integration and extrapolation to N = ∞. As for other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation. However, the corrections caused by fluctuation of the energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool to investigate the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, pre-chemical module, chemical module, geometric module and DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event-by-event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with the experimental results. In this paper, the influence of the inelastic cross sections and the vibrational excitation reaction on the parameters and on the DNA strand break yields was studied. Further work on NASIC is underway.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-05-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively falls back to exact discrete stochastic simulation for individual reactions whenever necessary. The HMKMC method is seen to be accurate and highly efficient.
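The accelerated (tau-leaping) idea mentioned above can be illustrated on a toy system. The sketch below is not the authors' HMKMC code: it compares an exact Gillespie (SSA) simulation of a simple decay reaction A → B against a tau-leap version that fires a Poisson number of events per fixed step; the rate constant, step size, and population are hypothetical.

```python
import math
import random

def ssa_decay(n0, k, t_end, rng):
    """Exact (Gillespie) simulation of first-order decay A -> B."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(k * n)   # waiting time to the next event
        if t > t_end:
            break
        n -= 1
    return n

def tau_leap_decay(n0, k, t_end, tau, rng):
    """Tau-leaping: fire a Poisson(k*n*tau) number of events per step."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        lam = k * n * tau
        # Poisson sample via Knuth's method (fine for small means)
        limit, p, events = math.exp(-lam), 1.0, 0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            events += 1
        n = max(0, n - events)
        t += tau
    return n

rng = random.Random(1)
exact = sum(ssa_decay(100, 0.5, 2.0, rng) for _ in range(200)) / 200
leap = sum(tau_leap_decay(100, 0.5, 2.0, 0.05, rng) for _ in range(200)) / 200
# Both averages should lie near the deterministic value 100*exp(-1) ~ 36.8
```

The exact method pays one random draw per event; the leaping method pays one draw per time step, which is the source of the speed-up when populations are large.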
Monte Carlo simulation in statistical physics: an introduction
NASA Astrophysics Data System (ADS)
Binder, K.; Heermann, D. W.
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.
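The core idea described above (random numbers → probability distributions → thermodynamic estimates) fits in a few lines. This is a generic illustration, not code from the book: a hypothetical two-level system is sampled from its Boltzmann distribution and the Monte Carlo mean energy is compared against the exact result.

```python
import math
import random

def mc_two_level(beta, eps, n_samples, rng):
    """Monte Carlo estimate of the mean energy of a two-level system
    (levels 0 and eps) sampled from its Boltzmann distribution."""
    p_excited = math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))
    e_sum = 0.0
    for _ in range(n_samples):
        if rng.random() < p_excited:   # occupy the excited level
            e_sum += eps
    return e_sum / n_samples

rng = random.Random(0)
beta, eps = 1.0, 1.0
estimate = mc_two_level(beta, eps, 100_000, rng)
exact = eps * math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))
# estimate should agree with exact (~0.269) to within statistical error
```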
Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.
2014-03-01
Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.
Monte Carlo simulation of the SPEAR reflectometer at LANSCE
Smith, G.S.
1995-12-31
The Monte Carlo instrument simulation code, MCLIB, contains elements to represent several components found in neutron spectrometers including slits, choppers, detectors, sources and various samples. Using these elements to represent the components of a neutron scattering instrument, one can simulate, for example, an inelastic spectrometer, a small angle scattering machine, or a reflectometer. In order to benchmark the code, we chose to compare simulated data from the MCLIB code with an actual experiment performed on the SPEAR reflectometer at LANSCE. This was done by first fitting an actual SPEAR data set to obtain the model scattering-length-density profile, β(z), for the sample and the substrate. Then these parameters were used as input values for the sample scattering function. A simplified model of SPEAR was chosen which contained all of the essential components of the instrument. A code containing the MCLIB subroutines was then written to simulate this simplified instrument. The resulting data was then fit and compared to the actual data set in terms of the statistics, resolution and accuracy.
NOTE: Monte Carlo simulation of RapidArc radiotherapy delivery
NASA Astrophysics Data System (ADS)
Bush, K.; Townson, R.; Zavgorodni, S.
2008-10-01
RapidArc radiotherapy technology from Varian Medical Systems is one of the most complex delivery systems currently available, and achieves an entire intensity-modulated radiation therapy (IMRT) treatment in a single gantry rotation about the patient. Three dynamic parameters can be continuously varied to create IMRT dose distributions—the speed of rotation, beam shaping aperture and delivery dose rate. Modeling of RapidArc technology was incorporated within the existing Vancouver Island Monte Carlo (VIMC) system (Zavgorodni et al 2007 Radiother. Oncol. 84 S49, 2008 Proc. 16th Int. Conf. on Medical Physics). This process was named VIMC-Arc and has become an efficient framework for the verification of RapidArc treatment plans. VIMC-Arc is a fully automated system that constructs the Monte Carlo (MC) beam and patient models from a standard RapidArc DICOM dataset, simulates radiation transport, collects the resulting dose and converts the dose into DICOM format for import back into the treatment planning system (TPS). VIMC-Arc accommodates multiple arc IMRT deliveries and models gantry rotation as a series of segments with dynamic MLC motion within each segment. Several verification RapidArc plans were generated by the Eclipse TPS on a water-equivalent cylindrical phantom and re-calculated using VIMC-Arc. This includes one 'typical' RapidArc plan, one plan for dual arc treatment and one plan with 'avoidance' sectors. One RapidArc plan was also calculated on a DICOM patient CT dataset. Statistical uncertainty of MC simulations was kept within 1%. VIMC-Arc produced dose distributions that matched very closely to those calculated by the anisotropic analytical algorithm (AAA) that is used in Eclipse. All plans also demonstrated better than 1% agreement of the dose at the isocenter. This demonstrates the capabilities of our new MC system to model all dosimetric features required for RapidArc dose calculations.
NASA Astrophysics Data System (ADS)
Sharma, Anupam; Long, Lyle N.
2004-10-01
A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock-tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
Monte Carlo simulation of radiation streaming from a radioactive material shipping cask
Liu, Y.Y.; Schwarz, R.A.; Tang, J.S.
1996-04-01
Simulated detection of gamma radiation streaming from a radioactive material shipping cask has been performed with the Monte Carlo codes MCNP4A and MORSE-SGC/S. Despite inherent difficulties in simulating deep penetration of radiation and streaming, the simulations have yielded results that agree within one order of magnitude with the radiation survey data, with reasonable statistics. These simulations have also provided insight into modeling radiation detection, notably on the location and orientation of the radiation detector with respect to photon streaming paths, and on techniques used to reduce variance in the Monte Carlo calculations.
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
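The article's BASIC program is not reproduced here, but the physics it relies on (refraction and one internal reflection in a spherical drop) can be sketched in a few lines of Python: scanning incidence angles for the minimum of the Descartes deviation recovers the familiar ~42° primary rainbow angle for water.

```python
import math

def primary_deviation(i, n):
    """Total deviation of a ray with one internal reflection in a drop:
    D = pi + 2*i - 4*r, with refraction angle r from Snell's law."""
    r = math.asin(math.sin(i) / n)
    return math.pi + 2 * i - 4 * r

n_water = 1.333
# Scan incidence angles for the minimum deviation (the rainbow caustic)
d_min = min(primary_deviation(k * math.pi / 2 / 10000, n_water)
            for k in range(1, 10000))
rainbow_angle = math.degrees(math.pi - d_min)  # close to 42 degrees
```

Repeating the scan with a slightly different refractive index per color would reproduce the dispersion effect treated in the article's second program.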
Monte Carlo simulations using infrared improved DGLAP-CS theory
NASA Astrophysics Data System (ADS)
Joseph, Samuel J.
A large number of Z and W bosons will be produced at the LHC. A careful study of their properties in the presence of QCD background processes will be important in studying the Standard Model more rigorously and in uncovering new physics which may appear through radiative corrections or through new tree-level processes with suppressed couplings. In order to reach the 1% attendant theoretical precision tag on processes such as single Z and W production, more precise Monte Carlos need to be developed. As a step towards this goal, a new set of infrared (IR) improved DGLAP-CS kernels was developed by Ward. For this work we implemented these infrared-improved kernels in HERWIG6.5 to create a new program, HERWIRI1.0. We discuss the phenomenological implications of our new Monte Carlo HERWIRI1.0. Specifically, we compared pp → 2 jets + X and pp → Z/γ* + X → ℓ+ℓ− + X′, with ℓ = e, μ, results obtained by HERWIG6.5 and HERWIRI1.0. The three main quantities that we compared were the pT, energy fraction and rapidity distributions. We made these comparisons at √s = 14 TeV, the highest LHC energy. Comparisons were also made for π+ production in pp → 2 jets + X at this energy. As expected, the IR-improved spectra were generally softer. As a test of HERWIRI1.0, a comparison with the pT and rapidity distribution data from FNAL at √s = 1.96 TeV for the process pp̄ → Z/γ* → e+e− + X was made. We found that the softer part of these observed spectra was better described by HERWIRI1.0. This represents a new chapter in precision Monte Carlo simulations for hadron-hadron high-energy collisions because the IR-improved kernels do not require an explicit cut-off.
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing GPU performance is the proper handling of the data structures. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times compared to the single-threaded CPU implementation.
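The multi-spin coding mentioned above stores one spin per bit so that many replicas are updated with bitwise operations. The following is a simplified, CPU-side Python illustration of the idea (a short 1-D chain with 64 replicas packed per word); it is not the CUDA implementation described in the abstract, and the chain length and seed are arbitrary.

```python
import random

# 64 independent Ising replicas, one bit per replica, packed into each
# lattice-site word; bit = 1 means spin up. A spin flip is an XOR.
N = 8                      # sites in a small 1-D chain (illustrative)
rng = random.Random(42)
lattice = [rng.getrandbits(64) for _ in range(N)]

def antiparallel_counts(lattice):
    """Per-replica number of antiparallel nearest-neighbour bonds,
    computed with XOR instead of 64 separate spin products."""
    counts = [0] * 64
    for a, b in zip(lattice, lattice[1:]):
        diff = a ^ b       # bit set where neighbouring spins differ
        for bit in range(64):
            counts[bit] += (diff >> bit) & 1
    return counts

# Flip site 3 simultaneously in the replicas selected by a random mask
flip_mask = rng.getrandbits(64)
lattice[3] ^= flip_mask
counts = antiparallel_counts(lattice)
```

On a GPU the same packing lets one thread update 64 replicas per memory access, which is where most of the reported speed-up comes from.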
Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.
2015-11-21
Finite temperature quantum and anharmonic effects are studied in H₂–Li⁺–benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H₂. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H₂ molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU_ads, and enthalpy, ΔH_ads, for H₂ adsorption onto Li⁺–benzene as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and that classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H₂–Li⁺–benzene are the "helicopter" and "ferris wheel" H₂ rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU_ads and ΔH_ads are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol⁻¹, respectively.
Treatment planning for a small animal using Monte Carlo simulation
Chow, James C. L.; Leung, Michael K. K.
2007-12-15
The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
A generic algorithm for Monte Carlo simulation of proton transport
NASA Astrophysics Data System (ADS)
Salvat, Francesc
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
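The sampling of the polar scattering angle from numerical DCSs can be sketched as inverse-CDF sampling with linear interpolation. This simplified version omits the adaptive interpolation-error control used in the actual algorithm, and the forward-peaked DCS tabulated below is hypothetical.

```python
import bisect
import math
import random

def build_cdf(angles, dcs):
    """Trapezoid-rule cumulative distribution from a tabulated DCS."""
    cdf = [0.0]
    for k in range(1, len(angles)):
        cdf.append(cdf[-1]
                   + 0.5 * (dcs[k] + dcs[k - 1]) * (angles[k] - angles[k - 1]))
    total = cdf[-1]
    return [c / total for c in cdf]

def sample_angle(angles, cdf, rng):
    """Sample the polar scattering angle by inverting the CDF,
    interpolating linearly between grid points."""
    u = rng.random()
    k = bisect.bisect_right(cdf, u)
    k = min(k, len(cdf) - 1)
    f = (u - cdf[k - 1]) / (cdf[k] - cdf[k - 1])
    return angles[k - 1] + f * (angles[k] - angles[k - 1])

# Hypothetical forward-peaked DCS on a coarse angular grid
grid = [i * math.pi / 100 for i in range(101)]
dcs = [math.exp(-20.0 * t) * math.sin(t) + 1e-6 for t in grid]
cdf = build_cdf(grid, dcs)
rng = random.Random(7)
samples = [sample_angle(grid, cdf, rng) for _ in range(5000)]
```

A production code refines the angular grid wherever linear interpolation of the CDF exceeds a tolerance, which is the "adaptive" part this sketch leaves out.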
Learning About Ares I from Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hall, Charlie E.
2008-01-01
This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.
Magnetic properties for cobalt nanorings: Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Ye, Qingying; Chen, Shuiyuan; Zhong, Kehua; Huang, Zhigao
2012-02-01
In this paper, two structure models of cobalt nanoring cells (double-nanorings and four-nanorings, named D-rings and F-rings, respectively) have been considered. Based on Monte Carlo simulation, the magnetic properties of the D-rings and F-rings, such as hysteresis loops, spin configurations and coercivity, have been studied. The simulated results indicate that both D-rings and F-rings with different inner radius (r) and separation of ring centers (d) display interesting magnetization behavior and spin configurations (onion-, vortex- and crescent-shape vortex-type states) during the magnetization process. Moreover, it is found that the overlap between the nearest single nanorings can result in the deviation of the vortex-type states in the connected regions. Therefore, the appropriate d should be well considered in the design of nanoring devices. The simulated results can be explained by the competition between exchange energy and dipolar energy in the Co nanoring system. Furthermore, it is found that the simulated temperature dependence of the coercivity for the D-rings with different d can be well described by Hc = H0 exp[−(T/T0)^p].
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.
2016-03-01
Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, the investigation of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified to what extent modification of several parameters influences the 3D flow and gas temperature fields, and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and draw conclusions on the sensitivity of the solutions to certain inputs. It is found that, among the water outgassing cases, the surface production rate distribution is the most influential variable for the flow field.
Monte Carlo Simulations of Background Spectra in Integral Imager Detectors
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.
1998-01-01
Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PICsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.
Acceleration of Markov chain Monte Carlo simulations through sequential updating
NASA Astrophysics Data System (ADS)
Ren, Ruichao; Orkoulas, G.
2006-02-01
Strict detailed balance is not necessary for Markov chain Monte Carlo simulations to converge to the correct equilibrium distribution. In this work, we propose a new algorithm which only satisfies the weaker balance condition, and it is shown analytically to have better mobility over the phase space than the Metropolis algorithm satisfying strict detailed balance. The new algorithm employs sequential updating and yields better sampling statistics than the Metropolis algorithm with random updating. We illustrate the efficiency of the new algorithm on the two-dimensional Ising model. The algorithm is shown to identify the correct equilibrium distribution and to converge faster than the Metropolis algorithm with strict detailed balance. The main advantages of the new algorithm are its simplicity and the feasibility of parallel implementation through domain decomposition.
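A minimal illustration of sequential updating follows: a Metropolis simulation of the 2D Ising model in which sites are visited in a fixed raster order rather than chosen at random. This is a sketch of the general idea, not the authors' algorithm; the lattice size, temperature, and sweep count are illustrative.

```python
import math
import random

def sweep_sequential(spins, L, beta, rng):
    """One Metropolis sweep visiting sites in a fixed raster order."""
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * s * nb  # energy change if spin (i, j) is flipped
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -s

def energy_per_spin(spins, L):
    """Nearest-neighbour Ising energy per spin (J = 1)."""
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return e / (L * L)

L, beta = 16, 0.6   # beta > beta_c ~ 0.44, so the lattice should order
rng = random.Random(3)
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(300):
    sweep_sequential(spins, L, beta, rng)
e = energy_per_spin(spins, L)                      # approaches -2 as T -> 0
m = abs(sum(sum(row) for row in spins)) / (L * L)  # |magnetization| per spin
```

Each raster sweep touches every site exactly once, which is what makes domain-decomposed parallel implementations straightforward compared with random site selection.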
Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.
Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G
2010-04-16
We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.
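The quantity the Swap estimator targets, S2 = -ln Tr(ρ_A²), can be cross-checked on small systems by exact diagonalization, which is the kind of validation the abstract mentions for the Heisenberg chain. The sketch below does this brute-force check for a 6-site chain; it is not the authors' valence-bond QMC.

```python
import numpy as np

def heisenberg_ground_state(n):
    """Dense ground state of the spin-1/2 Heisenberg chain
    H = sum_i S_i . S_{i+1} with open boundaries (small n only)."""
    sx = np.array([[0, 1], [1, 0]]) / 2.0
    sy = np.array([[0, -1j], [1j, 0]]) / 2.0
    sz = np.array([[1, 0], [0, -1]]) / 2.0
    def op(mat, site):
        ops = [np.eye(2)] * n
        ops[site] = mat
        out = ops[0]
        for o in ops[1:]:
            out = np.kron(out, o)
        return out
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            H += op(s, i) @ op(s, i + 1)
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def renyi2(psi, n, n_a):
    """Second Renyi entropy S2 = -ln Tr(rho_A^2) of the first n_a sites."""
    m = psi.reshape(2 ** n_a, 2 ** (n - n_a))
    rho_a = m @ m.conj().T          # reduced density matrix of subregion A
    return -np.log(np.real(np.trace(rho_a @ rho_a)))

psi = heisenberg_ground_state(6)
s2 = renyi2(psi, 6, 3)              # half-chain Renyi-2 entropy
```

In the QMC procedure the same number is obtained stochastically as -ln of the expectation value of the Swap operator on two copies of the system, which is what makes it accessible at sizes (L = 32) far beyond exact diagonalization.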
Micrometeoroid abrasion of lunar rocks - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Hoerz, F.; Schneider, E.; Hill, R. E.
1974-01-01
A Monte Carlo computer model simulating the randomness of the impact process both in space and in time is developed in order to provide insight into lunar rock erosion by single-particle abrasion and into the bombardment history of fractional surface areas of lunar rocks. Microcrater frequencies derived from lunar rocks are used to calculate the magnitude and probability of each cratering event, and experimental cratering results are employed to determine the eroded volumes for individual crater sizes. It is shown that different fractional surface areas of a lunar rock sample may have completely different bombardment histories, and that the exposure histories and actual erosion depths of the surfaces vary accordingly and are highly heterogeneous. A minimum erosion of 0.3 to 0.6 mm over the past one million years is obtained.
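The core of such a model is simple to sketch: impacts land at random positions, crater sizes are drawn from a power-law production function, and eroded volume accumulates patch by patch. All constants below are illustrative stand-ins, not values from the paper.

```python
import random

def simulate_patch_erosion(n_patches, n_impacts, rng, d_min=0.01, alpha=2.0):
    """Toy Monte Carlo of single-particle abrasion: each impact hits a random
    surface patch, with a crater diameter drawn from a power law
    N(>d) ~ d^(-alpha) by inverse-transform sampling.  Eroded volume is taken
    as proportional to d^3.  Returns the eroded depth proxy accumulated by
    each patch, showing how the bombardment histories of different fractional
    areas of one rock diverge."""
    depth = [0.0] * n_patches
    for _ in range(n_impacts):
        patch = rng.randrange(n_patches)                  # random in space
        d = d_min * (1.0 - rng.random()) ** (-1.0 / alpha)  # power-law size
        depth[patch] += d ** 3                            # eroded volume ~ d^3
    return depth

rng = random.Random(7)
depths = simulate_patch_erosion(n_patches=10, n_impacts=5000, rng=rng)
```

Even with identical exposure times, the heavy-tailed size distribution makes a few large events dominate individual patches, which is the heterogeneity the abstract reports.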
Monte Carlo simulation to analyze the performance of CPV modules
NASA Astrophysics Data System (ADS)
Herrero, Rebeca; Antón, Ignacio; Sala, Gabriel; De Nardis, Davide; Araki, Kenji; Yamaguchi, Masafumi
2017-09-01
A model to evaluate the performance of high concentrator photovoltaic (HCPV) modules (one that generates current-voltage curves) has been applied together with a Monte Carlo approach to obtain a distribution of modules with a given set of characteristics (e.g., receivers' electrical properties and misalignments within the elementary units of modules) related to a manufacturing scenario. In this paper, the performance of CPV systems (tracker and inverter) containing the set of simulated modules is evaluated for different system characteristics: inverter configuration, sorting of modules, and bending of the tracker frame. Thus, the study of HCPV technology with regard to its angular constraints is fully covered by analyzing all the possible elements affecting the generated electrical power.
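The Monte Carlo step here amounts to drawing module realizations from assumed component-level distributions. A minimal sketch, with an invented Gaussian misalignment model and a 1-degree acceptance angle (neither taken from the paper):

```python
import math
import random

def sample_module_current(n_units, rng, i_nom=1.0, sigma_align=0.3):
    """Draw one simulated CPV module: each elementary unit's photocurrent is
    reduced by a random angular misalignment (Gaussian error, Gaussian
    acceptance curve with ~1 degree width).  Units are connected in series,
    so the module current is limited by the worst unit.  All parameter
    values are illustrative assumptions, not the paper's."""
    currents = []
    for _ in range(n_units):
        theta = rng.gauss(0.0, sigma_align)                 # misalignment (deg)
        currents.append(i_nom * math.exp(-(theta / 1.0) ** 2))
    return min(currents)                                    # series limitation

rng = random.Random(3)
population = [sample_module_current(n_units=20, rng=rng) for _ in range(1000)]
mean_i = sum(population) / len(population)  # mean current of the module batch
```

Repeating the draw gives a population of non-identical modules, which can then be sorted, strung onto inverters, or perturbed by tracker-frame bending to study system-level losses as in the paper.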
Monte Carlo simulation of laser backscatter from sea water
NASA Astrophysics Data System (ADS)
Koerber, B. W.; Phillips, D. M.
1982-01-01
A Monte Carlo simulation study of laser backscatter from sea water has been carried out to provide data required to assess the feasibility of measuring inherent optical propagation properties of sea water from an aircraft. The possibility was examined of deriving such information from the backscatter component of the return signals measured by the WRELADS laser airborne depth sounder system. Computations were made for various water turbidity conditions and for different fields of view of the WRELADS receiver. Using a simple model fitted to the computed backscatter data, it was shown that values of the scattering and absorption coefficients can be derived from the initial amplitude and the decay rate of the backscatter envelope.
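A stripped-down version of such a photon Monte Carlo, with isotropic scattering standing in for sea water's strongly forward-peaked phase function, illustrates how the backscattered fraction depends on the absorption coefficient a and scattering coefficient b (symbols and constants are illustrative):

```python
import math
import random

def backscatter_fraction(n_photons, a, b, rng):
    """Toy photon Monte Carlo for laser return from a semi-infinite water
    column: exponential free paths with attenuation c = a + b, absorption
    handled by multiplying the photon weight by the single-scattering albedo
    b / c at each event, and isotropic scattering (a crude stand-in for sea
    water's forward-peaked phase function).  Returns the fraction of launched
    weight that re-crosses the surface."""
    c = a + b
    returned = 0.0
    for _ in range(n_photons):
        z, mu, w = 0.0, 1.0, 1.0   # depth, direction cosine (down is +), weight
        for _ in range(200):       # cap the history length
            s = -math.log(1.0 - rng.random()) / c   # free path ~ Exp(c)
            z += mu * s
            if z < 0.0:            # escaped back up through the surface
                returned += w
                break
            w *= b / c             # survive the interaction with reduced weight
            mu = 2.0 * rng.random() - 1.0           # isotropic new direction
        # photons still in the water after the cap are discarded
    return returned / n_photons

rng = random.Random(11)
frac = backscatter_fraction(20000, a=0.05, b=0.15, rng=rng)
```

Binning the returned weight by total path length instead of summing it would give the time-resolved backscatter envelope whose amplitude and decay rate the study inverts for a and b.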
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
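The weight-window game at the heart of this discussion can be sketched as follows; the `max_split` cap plays the role of the dynamic 'de-optimisation' described above, trading variance-reduction performance for bounded history length. This is a schematic of the general weight-window mechanism, not CCFE's actual MCNP modification.

```python
import random

def apply_weight_window(w, w_low, w_high, rng, max_split=10):
    """One weight-window variance-reduction step: particles arriving above
    the window split into several lower-weight copies; particles below it
    play Russian roulette (killed with probability 1 - w / w_low, otherwise
    restored to weight w_low, so expected weight is conserved).  Capping the
    split multiplicity bounds the work one extreme arriving weight can
    spawn, at the cost of leaving the weight partly outside the window.
    Returns the list of surviving track weights (empty if rouletted)."""
    if w > w_high:
        n = min(int(w / w_high) + 1, max_split)   # split into n tracks
        return [w / n] * n
    if w < w_low:
        survival = w / w_low                      # roulette toward w_low
        return [w_low] if rng.random() < survival else []
    return [w]

rng = random.Random(5)
tracks = apply_weight_window(25.0, w_low=0.5, w_high=2.0, rng=rng, max_split=10)
```

Without the cap, a particle arriving with weight far above `w_high` would split into a proportionally huge number of tracks, which is exactly the 'long history' that stalls one processor while the rest of the cluster sits idle.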
Yeh, Chun-Hung; Schmitt, Benoît; Le Bihan, Denis; Li-Schlittgen, Jing-Rebecca; Lin, Ching-Po; Poupon, Cyril
2013-01-01
This article describes the development and application of an integrated, generalized, and efficient Monte Carlo simulation system for diffusion magnetic resonance imaging (dMRI), named Diffusion Microscopist Simulator (DMS). DMS comprises a random walk Monte Carlo simulator and an MR image synthesizer. The former has the capacity to perform large-scale simulations of Brownian dynamics in the virtual environments of neural tissues at various levels of complexity, and the latter is flexible enough to synthesize dMRI datasets from a variety of simulated MRI pulse sequences. The aims of DMS are to give insights into the link between the fundamental diffusion process in biological tissues and the features observed in dMRI, as well as to provide appropriate ground-truth information for the development, optimization, and validation of dMRI acquisition schemes for different applications. The validity, efficiency, and potential applications of DMS are evaluated through four benchmark experiments: simulated dMRI of white matter fibers, multiple scattering diffusion imaging, biophysical modeling of polar cell membranes, and high angular resolution diffusion imaging and fiber tractography of complex fiber configurations. We expect this software tool to be of substantial value in clarifying the interrelationship between dMRI and the microscopic characteristics of brain tissues, and in advancing biophysical modeling and dMRI methodologies.
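The random-walk half of such a simulator reduces, in the simplest free-diffusion case, to accumulating spin phase along Brownian trajectories under a pulsed-gradient sequence. The sketch below (1-D, ideal rectangular gradients, illustrative parameter values not taken from the article) should approach the analytic attenuation exp(-bD):

```python
import math
import random

def pgse_signal(n_walkers, D, grad, delta, Delta, dt, rng):
    """Minimal random-walk dMRI synthesizer for free 1-D diffusion: walkers
    take Gaussian steps of variance 2*D*dt and accumulate spin phase under
    two ideal rectangular gradient lobes (+g for [0, delta), -g for
    [Delta, Delta + delta)).  The signal is the ensemble average of
    cos(phase); for free diffusion it approaches exp(-b*D) with
    b = gamma^2 g^2 delta^2 (Delta - delta/3)."""
    gamma = 2.675e8                      # 1H gyromagnetic ratio (rad/s/T)
    n_steps = int((Delta + delta) / dt)
    signal = 0.0
    for _ in range(n_walkers):
        x, phase = 0.0, 0.0
        for k in range(n_steps):
            t = k * dt
            if t < delta:                       # first gradient lobe (+g)
                phase += gamma * grad * x * dt
            elif Delta <= t < Delta + delta:    # second lobe (-g)
                phase -= gamma * grad * x * dt
            x += rng.gauss(0.0, math.sqrt(2.0 * D * dt))
        signal += math.cos(phase)
    return signal / n_walkers

rng = random.Random(2)
s = pgse_signal(n_walkers=2000, D=2e-9, grad=0.03,
                delta=0.01, Delta=0.02, dt=1e-4, rng=rng)
```

Replacing the free 1-D step with reflections off membranes and surfaces is what turns this toy into a tissue-substrate simulator of the kind DMS implements.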
Parallel Monte Carlo Simulation for control system design
NASA Technical Reports Server (NTRS)
Schubert, Wolfgang M.
1995-01-01
The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
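The SRS cost evaluation is embarrassingly parallel: each batch of Monte Carlo samples is statistically independent, so batches map directly onto processors. A toy version, with a first-order discrete system standing in for the full closed-loop model (an assumption for illustration, not the thesis's plant):

```python
import random

def instability_probability(batch_size, n_batches, rng_seed=0):
    """Monte Carlo stochastic-robustness estimate: sample the uncertain
    plant parameter, test a stability criterion, and average the indicator.
    The toy 'plant' is x[k+1] = a * x[k], stable iff |a| < 1, with
    a ~ N(0.9, 0.15).  Each batch uses its own seeded random stream, so in
    a parallel run every batch can be farmed out to a different processor
    and the counts summed at the end."""
    total_unstable = 0
    for b in range(n_batches):
        rng = random.Random(rng_seed + b)   # independent per-batch stream
        unstable = sum(1 for _ in range(batch_size)
                       if abs(rng.gauss(0.9, 0.15)) >= 1.0)
        total_unstable += unstable
    return total_unstable / (batch_size * n_batches)

p = instability_probability(batch_size=5000, n_batches=4)
```

In the full SRS scheme this probability (together with other design-metric violation probabilities) forms the cost that the genetic algorithm minimizes over controller parameters.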
Progress report for the Monte-Carlo gamma-ray spectrum simulation program BSIMUL
NASA Technical Reports Server (NTRS)
Haywood, S. E.; Rester, A. C., Jr.
1996-01-01
The progress made during 1995 on the Monte-Carlo gamma-ray spectrum simulation program BSIMUL is discussed. Several features have been added, including the ability to model shields that are tapered cylinders. Several simulations were made on the Near Earth Asteroid Rendezvous detector.
Relation between gamma-ray family and EAS core: Monte-Carlo simulation of EAS core
NASA Technical Reports Server (NTRS)
Yanagita, T.
1985-01-01
Preliminary results of a Monte-Carlo simulation of Extensive Air Shower (EAS) cores (Ne = 100,000) are reported. For the first collision at the top of the atmosphere, a model with high multiplicity (high rapidity density) and large Pt (1.5 GeV average) is assumed. Most of the simulated cores show a complicated structure.
Catfish: A Monte Carlo simulator for black holes at the LHC
NASA Astrophysics Data System (ADS)
Cavaglià, M.; Godang, R.; Cremaldi, L.; Summers, D.
2007-09-01
We present a new Fortran Monte Carlo generator to simulate black hole events at CERN's Large Hadron Collider. The generator interfaces to the PYTHIA Monte Carlo fragmentation code. The physics of the BH generator includes, but is not limited to, inelasticity effects, exact field emissivities, corrections to semiclassical black hole evaporation, and gravitational energy loss at formation. These features are essential to realistically reconstruct the detector response and to test different models of black hole formation and decay at the LHC.
Monte Carlo Simulation of Callisto's Exosphere
NASA Astrophysics Data System (ADS)
Vorburger, Audrey; Wurz, Peter; Galli, André; Mousis, Olivier; Barabash, Stas; Lammer, Helmut
2014-05-01
Close to the surface, the sublimated particles dominate the day-side exosphere; however, their density profiles (with the exception of H and H2) decrease much more rapidly with altitude than those of the sputtered particles, so the latter start to dominate at altitudes above ~1000 km. Since the JUICE flybys are as low as 200 km above Callisto's surface, NIM is expected to register both the sublimated and the sputtered particle populations. Our simulations show that NIM's sensitivity is high enough to allow the detection of particles sputtered from the icy as well as the mineral surfaces, and to distinguish between the different composition models.
Liu, Zhirong; Chan, Hue Sun
2008-04-14
We develop two classes of Monte Carlo moves for efficient sampling of wormlike DNA chains that can have significant degrees of supercoiling, a conformational feature that is key to many aspects of biological function including replication, transcription, and recombination. One class of moves entails reversing the coordinates of a segment of the chain along one, two, or three axes of an appropriately chosen local frame of reference. These transformations may be viewed as a generalization, to the continuum, of the Madras-Orlitsky-Shepp algorithm for cubic lattices. Another class of moves, termed T+/-2, allows for interconversions between chains with different lengths by adding or subtracting two beads (monomer units) to or from the chain. Length-changing moves are generally useful for conformational sampling with a given site juxtaposition, as has been shown in previous lattice studies. Here, the continuum T+/-2 moves are designed to enhance their acceptance rate in supercoiled conformations. We apply these moves to a wormlike model in which excluded volume is accounted for by a bond-bond repulsion term. The computed autocorrelation functions for the relaxation of bond length, bond angle, writhe, and branch number indicate that the new moves lead to significantly more efficient sampling than conventional bead displacements and crankshaft rotations. A close correspondence is found in the equilibrium ensemble between the map of writhe computed for pairs of chain segments and the map of site juxtapositions or self-contacts. To evaluate the more coarse-grained freely jointed chain (random-flight) and cubic lattice models that are commonly used in DNA investigations, twisting (torsional) potentials are introduced into these models. Conformational properties for a given superhelical density sigma may then be sampled by computing the writhe and using White's formula to relate the degree of twisting to writhe and sigma. Extensive comparisons of contact patterns and knot
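The first class of moves can be illustrated in the continuum with the simplest member of the family: point inversion of a sub-chain through the midpoint of its end beads, followed by reversal of bead order, which preserves chain connectivity and internal bond lengths. This sketches the move geometry only; the Metropolis acceptance test and the paper's full one-, two-, and three-axis local-frame choices are omitted.

```python
def invert_segment(coords, i, j):
    """MOS-style trial move on a bead chain (a sketch, not the paper's exact
    implementation): point-invert beads i..j through the midpoint of the two
    end beads (r -> 2m - r) and reverse their order.  The end beads map onto
    each other and swap back under the order reversal, so the chain stays
    connected and every bond length inside the segment is preserved; in a
    real simulation the move would be accepted or rejected by a Metropolis
    test on the bending/twist/excluded-volume energy change."""
    mx = (coords[i][0] + coords[j][0]) / 2.0
    my = (coords[i][1] + coords[j][1]) / 2.0
    mz = (coords[i][2] + coords[j][2]) / 2.0
    segment = [(2 * mx - x, 2 * my - y, 2 * mz - z)
               for (x, y, z) in coords[i:j + 1]]
    coords[i:j + 1] = segment[::-1]

# Toy zig-zag chain; invert an interior segment and note the endpoints stay put.
chain = [(float(k), 0.5 * (k % 2), 0.0) for k in range(10)]
invert_segment(chain, 2, 6)
```

Because the move is an isometry of the segment, only interactions between the segment and the rest of the chain change energy, which keeps acceptance rates workable even in tightly supercoiled conformations.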
Heterogeneous multiscale Monte Carlo simulations for gold nanoparticle radiosensitization.
Martinov, Martin P; Thomson, Rowan M
2017-02-01
To introduce the heterogeneous multiscale (HetMS) model for Monte Carlo simulations of gold nanoparticle dose-enhanced radiation therapy (GNPT), a model characterized by its varying levels of detail on different length scales within a single phantom; to apply the HetMS model in two different scenarios relevant for GNPT and to compare computed results with others published. The HetMS model is implemented using an extended version of the EGSnrc user-code egs_chamber; the extended code is tested and verified via comparisons with recently published data from independent GNP simulations. Two distinct scenarios for the HetMS model are then considered: (a) monoenergetic photon beams (20 keV to 1 MeV) incident on a cylinder (1 cm radius, 3 cm length); (b) isotropic point source (brachytherapy source spectra) at the center of a 2.5 cm radius sphere with gold nanoparticles (GNPs) diffusing outwards from the center. Dose enhancement factors (DEFs) are compared for different source energies, depths in phantom, gold concentrations, GNP sizes, and modeling assumptions, as well as with independently published values. Simulation effici