Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
NASA Astrophysics Data System (ADS)
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed the early phases of binary and multiple star formation at high spatial resolution, and these observations should be compared with theoretical models of comparable resolution. We present two theoretical models for (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core fragments during the collapse, and dynamical interaction between the fragments produces an arc-like structure, one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto a protobinary. The simulations exhibit asymmetry of the circumbinary disk; such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.
Numerical Simulation on a Possible Formation Mechanism of Interplanetary Magnetic Cloud Boundaries
NASA Astrophysics Data System (ADS)
Fan, Quan-Lin; Wei, Feng-Si; Feng, Xue-Shang
2003-08-01
The formation mechanism of interplanetary magnetic cloud (MC) boundaries is numerically investigated by simulating the interactions between an MC carrying some initial momentum and a local interplanetary current sheet. The compressible 2.5D MHD equations are solved. Results show that magnetic reconnection is a possible formation mechanism when an MC interacts with a surrounding current sheet. A number of interesting features are found. For instance, the front boundary of the MC is a magnetic reconnection boundary that could be caused by driven reconnection ahead of the cloud, while the tail boundary might be caused by the driving of the entrained flow as a result of the Bernoulli principle. Analysis of the magnetic field and plasma data demonstrates that these two boundaries exhibit large values of the plasma parameter β, clear increases in plasma temperature and density, a distinct decrease in magnetic field magnitude, and a transition of the magnetic field direction of about 180 degrees. The outcome of the present simulation agrees qualitatively with observational results on MC boundaries inferred from IMP-8, etc. The project was supported by the National Natural Science Foundation of China under Grant Nos. 40104006, 49925412, and 49990450.
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and the NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results of the SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2015-03-01
Quantification of the optical properties of tissues and blood by noninvasive photoacoustic (PA) imaging may provide useful information for screening and early diagnosis of diseases. A linearized 2D image reconstruction algorithm based on the PA wave equation and the photon diffusion equation (PDE) can reconstruct images at a computational cost smaller than that of methods based on the 3D radiative transfer equation. However, the reconstructed image is affected by differences between the actual and assumed light propagation. In this study, the quantitative capability of linearized 2D image reconstruction was investigated by numerical simulations and a phantom experiment. The numerical simulations combined a 3D Monte Carlo (MC) simulation with a 2D finite element calculation of the PDE. In the phantom experiment, PA pressures were acquired by a probe comprising an optical fiber for illumination and a ring-shaped P(VDF-TrFE) ultrasound transducer; the measured object was made of Intralipid and indocyanine green. The numerical simulations showed that the linearized image reconstruction method recovered the absorption coefficients while alleviating the dependence of the PA amplitude on the depth of the photon absorber. The method worked effectively under light propagation calculated by the 3D MC simulation, although some errors occurred. The phantom experiment validated the results of the numerical simulations.
Simulation of substrate degradation in composting of sewage sludge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Jun; Gao Ding, E-mail: gaod@igsnrr.ac.c; Chen Tongbin
2010-10-15
To simulate the substrate degradation kinetics of the composting process, this paper develops a mathematical model with a first-order reaction assumption and heat/mass balance equations. A pilot-scale composting test with a mixture of sewage sludge and wheat straw was conducted in an insulated reactor. The BVS (biodegradable volatile solids) degradation process, matrix mass, MC (moisture content), DM (dry matter) and VS (volatile solids) were simulated numerically from the model and experimental data. The numerical simulation offered a method for simulating k (the first-order rate constant) and estimating k20 (the first-order rate constant at 20 °C). After comparison with experimental values, the relative errors of the simulated values at maturity were 0.22% for the compost mass, 2.9% for MC, 4.9% for DM and 5.2% for VS, indicating that the simulation is a good fit. The k of the sewage sludge was simulated, and k20 and k20s (the first-order rate coefficient of the slow fraction of BVS at 20 °C) of the sewage sludge were estimated as 0.082 and 0.015 d⁻¹, respectively.
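As a rough illustration of the first-order kinetics above, the sketch below integrates dBVS/dt = -k(T)·BVS with a temperature correction of the common form k(T) = k20·θ^(T-20). The correction form, the θ value, and the temperature profile are assumptions for illustration only; the abstract gives only k20 = 0.082 d⁻¹.

```python
import numpy as np

def simulate_bvs(bvs0, k20, theta, temps, dt=1.0):
    """Integrate dBVS/dt = -k(T) * BVS, with the first-order rate constant
    corrected from its 20 degC reference value k20 (d^-1)."""
    bvs = np.empty(len(temps) + 1)
    bvs[0] = bvs0
    for i, T in enumerate(temps):
        k = k20 * theta ** (T - 20.0)          # temperature-corrected rate (assumed form)
        bvs[i + 1] = bvs[i] * np.exp(-k * dt)  # exact one-step decay
    return bvs

# 30-day illustrative temperature profile: warm-up, then thermophilic plateau
temps = np.concatenate([np.linspace(20, 60, 10), np.full(20, 55.0)])
profile = simulate_bvs(bvs0=100.0, k20=0.082, theta=1.066, temps=temps)
print(f"BVS remaining after 30 d: {profile[-1]:.1f}%")
```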
Fundamental Experimental and Numerical Investigation of Active Control of 3-D Flows
2011-10-06
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of the MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate that maximizes the gas saturation (residual trapping) during injection, thereby minimizing the chance of leakage.
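A minimal sketch of the surrogate-then-sample workflow described above, assuming a one-dimensional standard-normal uncertain parameter and a toy stand-in for the reservoir simulator; the paper's MIP-based basis selection is not reproduced here.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite basis

rng = np.random.default_rng(0)
simulator = lambda x: np.exp(0.3 * x) + 0.1 * x**2   # toy stand-in for the expensive code

# Build a degree-4 PCE from a small number of simulator runs
xi_train = rng.standard_normal(50)                   # uncertain parameter ~ N(0,1)
coeffs = He.hermefit(xi_train, simulator(xi_train), deg=4)

# Cheap MC on the closed-form surrogate instead of the simulator
xi_mc = rng.standard_normal(1_000_000)
y_mc = He.hermeval(xi_mc, coeffs)
print(f"mean = {y_mc.mean():.4f}, std = {y_mc.std():.4f}")
```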
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of the simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique the optimum input scaling parameter is found, while for the IIS technique the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
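The variance advantage of a translated sampling density, echoing the IIS optimum-translation result above, can be seen in a small rare-event sketch; this is a generic Gaussian tail probability, not the paper's communication system.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, n = 4.0, 100_000            # "error" threshold and sample size

# Plain Monte Carlo: almost no samples land in the rare-event region
x = rng.standard_normal(n)
p_mc = np.mean(x > gamma)

# Importance sampling with a mean-translated density (cf. the IIS
# optimum-translation parameter): sample N(t,1), reweight back to N(0,1)
t = gamma
y = rng.standard_normal(n) + t
w = np.exp(-t * y + 0.5 * t**2)    # likelihood ratio N(0,1)/N(t,1)
p_is = np.mean((y > gamma) * w)

print(f"MC: {p_mc:.2e}  IS: {p_is:.2e}  exact: 3.17e-05")
```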
NASA Astrophysics Data System (ADS)
Kredler, L.; Häußler, W.; Martin, N.; Böni, P.
Flux is still a major limiting factor in neutron research. For instruments supplied with cold neutrons through neutron guides, both at present steady-state sources and at new spallation neutron sources, it is therefore important to optimize the instrumental setup and the neutron guidance. Optimization of the neutron guide geometry and of the instrument itself can be performed by numerical ray-tracing simulations using existing open-access codes. In this paper, we discuss how such Monte Carlo simulations have been employed to plan improvements of the Neutron Resonant Spin Echo spectrometer RESEDA (FRM II, Germany) as well as of the neutron guides before and within the instrument. The essential components have been represented with the help of the McStas ray-tracing package. The expected intensity has been tested by means of several virtual detectors implemented in the simulation code. Comparison between simulations and preliminary measurement results shows good agreement and demonstrates the reliability of the numerical approach. These results will be taken into account in the planning of new components installed in the guide system.
MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pater, P; Vallieres, M; Seuntjens, J
2014-06-15
Purpose: To present a hands-on project on Monte Carlo (MC) methods recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5-hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of an MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that computes a dose distribution for 50 keV photons; a kerma approximation to dose deposition is assumed. A survey was conducted, to which 10 of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of teaching radiation physics through MC, and surveyed possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class, and 65% estimated they had a good to very good understanding of MC methods after attending the class. 80% of students felt that the MC project helped them significantly in understanding simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate curriculum since 2012. MC methods produce "gold standard" dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose deposition and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from the governments of Canada and Quebec. PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
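A toy version of the sampling steps the handout walks through (free-path length, interaction type, local energy deposit under a kerma-style assumption) might look as follows; the attenuation coefficients and the Compton energy-loss sampling are placeholders, not real 50 keV physics.

```python
import numpy as np

rng = np.random.default_rng(2)
MU_PE, MU_CO = 0.04, 0.16      # hypothetical photoelectric/Compton coefficients (cm^-1)
MU = MU_PE + MU_CO             # total attenuation coefficient

def transport_photon(zmax=10.0):
    """Track one photon along z and return its (depth, energy-deposit) sites.
    Kerma-style approximation: transferred energy is scored locally."""
    z, e, deposits = 0.0, 50.0, []
    while e > 1.0:
        z += -np.log(rng.random()) / MU              # sample free-path length
        if z > zmax:                                 # photon escapes the slab
            break
        if rng.random() < MU_PE / MU:                # sample interaction type
            deposits.append((z, e)); e = 0.0         # photoelectric: absorb all
        else:
            frac = 0.1 + 0.4 * rng.random()          # placeholder Compton loss
            deposits.append((z, e * frac)); e *= 1.0 - frac
    return deposits

print(transport_photon())
```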
New features in McStas, version 1.5
NASA Astrophysics Data System (ADS)
Åstrand, P.-O.; Lefmann, K.; Farhi, E.; Nielsen, K.; Skårup, P.
The neutron ray-tracing simulation package McStas has attracted numerous users, and development of the package continues with version 1.5, released at the ICNS 2001 conference. New features include support for neutron polarisation, labelling of neutrons, realistic source and sample components, and an interface to the Risø instrument-control software TASCOM. We give a general introduction to McStas and present the latest developments. In particular, we give an example of how the neutron-label option has been used to locate the origin of a spurious side-peak observed in an experiment with RITA-1 at Risø.
Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo
2013-01-01
Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson's equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise in which the ionic species with a higher valence-to-volume ratio has a larger probability of forming the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling. PMID:22680474
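The Metropolis criterion used in such canonical-ensemble MC runs can be sketched for a single counterion in a linearized wall potential; the potential form, parameters, and one-dimensional geometry are illustrative assumptions, not the primitive model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def beta_U(z, q, efield=1.0):
    """Dimensionless energy of a point counterion of valence q at distance z
    from a charged wall: hard wall at z = 0 plus a linearized potential."""
    return q * efield * z if z > 0 else np.inf

def metropolis(q, n_steps=100_000, dz=0.5):
    z, samples = 1.0, []
    for _ in range(n_steps):
        z_new = z + dz * (rng.random() - 0.5)        # trial displacement
        # Metropolis criterion: accept with probability min(1, exp(-beta dU))
        if rng.random() < np.exp(-(beta_U(z_new, q) - beta_U(z, q))):
            z = z_new
        samples.append(z)
    return np.array(samples)

for q in (1, 2, 3):
    print(q, metropolis(q).mean())   # higher valence stays closer to the wall
```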
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time-consuming, especially in the computation of asymptotic-type observables, including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, significantly reducing the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
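For contrast, here is a sketch of the brute-force forward MC baseline that the backward BSDE method is designed to beat, on a toy drag-plus-noise momentum model; the drift, noise level, and runaway threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def runaway_prob_forward(x0, drift, sigma, x_run, T=1.0, dt=1e-3, n=10_000):
    """Brute-force forward MC estimate of P(max_t X_t >= x_run | X_0 = x0)
    for dX = drift(X) dt + sigma dW, via Euler-Maruyama paths."""
    x = np.full(n, x0)
    hit = np.zeros(n, dtype=bool)
    for _ in range(int(T / dt)):
        x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        hit |= x >= x_run
    return hit.mean()

# Toy momentum dynamics: weak drag plus noise, "runaway" above p = 3
print(runaway_prob_forward(1.0, lambda p: -0.5 * p, 0.8, 3.0))
```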
Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An
2017-11-08
A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors and resonators. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to inevitable process deviations. Feasible numerical methods are required to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of a MEMS beam. A doubly clamped polybeam is used to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions are made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of fourth-order GPC expansions with orthogonal terms also succeed in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method, and the fourth-order GPC approximation attains the mean test value of the residual stress with a probability of around 54.3%. The corresponding yield exceeds 90% within two standard deviations of the mean.
Schwarzschild-de Sitter spacetimes, McVittie coordinates, and trumpet geometries
NASA Astrophysics Data System (ADS)
Dennison, Kenneth A.; Baumgarte, Thomas W.
2017-12-01
Trumpet geometries play an important role in numerical simulations of black hole spacetimes, which are usually performed under the assumption of asymptotic flatness. Our Universe is not asymptotically flat, however, which has motivated numerical studies of black holes in asymptotically de Sitter spacetimes. We derive analytical expressions for trumpet geometries in Schwarzschild-de Sitter spacetimes by first generalizing the static maximal trumpet slicing of the Schwarzschild spacetime to static constant mean curvature trumpet slicings of Schwarzschild-de Sitter spacetimes. We then switch to a comoving isotropic radial coordinate which results in a coordinate system analogous to McVittie coordinates. At large distances from the black hole the resulting metric asymptotes to a Friedmann-Lemaître-Robertson-Walker metric with an exponentially-expanding scale factor. While McVittie coordinates have another asymptotically de Sitter end as the radial coordinate goes to zero, so that they generalize the notion of a "wormhole" geometry, our new coordinates approach a horizon-penetrating trumpet geometry in the same limit. Our analytical expressions clarify the role of time-dependence, boundary conditions and coordinate conditions for trumpet slices in a cosmological context, and provide a useful test for black hole simulations in asymptotically de Sitter spacetimes.
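For reference, the static Schwarzschild-de Sitter metric underlying these slicings, and the de Sitter scale factor the slices asymptote to, in standard textbook form with G = c = 1:

```latex
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2,
\qquad f(r) = 1 - \frac{2M}{r} - \frac{\Lambda r^2}{3},
\qquad a(t) = e^{Ht},\quad H = \sqrt{\Lambda/3}.
```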
LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme
NASA Technical Reports Server (NTRS)
Hadjadj, A; Yee, H. C.; Sjogreen, B.
2011-01-01
An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) at different convective Mach numbers (Mc) and Reynolds numbers. The high-order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The values of Mc considered span the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self-similarity property, compressibility effects, and the presence of large-scale structures with shocklets at high Mc) are considered for the LES study. The LES results, using the same scheme parameters for all studied cases, agree well with the experimental results of Barone et al. (2006) and the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).
Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since a frequency diversity gain is obtained. On the other hand, conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. A conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by Monte Carlo numerical computation using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
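The Monte Carlo averaging of a conditional BER over fading realizations, as used in the evaluation above, can be sketched for plain BPSK; the actual conditional BER of the proposed orthogonal MC DS-CDMA system with FDE is more involved.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(6)

def avg_ber_rayleigh(ebn0_db, n_channels=100_000):
    """Average the conditional BER of BPSK, Q(sqrt(2 |h|^2 Eb/N0)), over
    Monte Carlo realizations of a Rayleigh fading channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    h2 = rng.exponential(1.0, n_channels)      # |h|^2 is exponential for Rayleigh
    # Q(sqrt(2x)) = erfc(sqrt(x)) / 2
    return np.mean([0.5 * erfc(sqrt(g * ebn0)) for g in h2])

print(f"average BER at 10 dB: {avg_ber_rayleigh(10):.4f}")   # theory: ~0.0233
```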
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive, numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction, the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite great improvements in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
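A minimal control-variate sketch of the multi-fidelity idea: a few expensive high-fidelity samples, many cheap correlated low-fidelity samples, and an estimator that exploits the correlation. Both model functions here are invented toys, not the solar-receiver models of the talk.

```python
import numpy as np

rng = np.random.default_rng(7)

hf = lambda x: np.sin(x) + 0.05 * x**2     # "expensive" high-fidelity model
lf = lambda x: np.sin(x)                   # cheap, correlated low-fidelity model

n_hf, n_lf = 100, 100_000
x_hf = rng.standard_normal(n_hf)           # few paired HF/LF evaluations
x_lf = rng.standard_normal(n_lf)           # many LF-only evaluations

yh, yl = hf(x_hf), lf(x_hf)
alpha = np.cov(yh, yl)[0, 1] / np.var(yl, ddof=1)   # control-variate weight

# HF mean plus a correction using the better-resolved LF mean
est = yh.mean() + alpha * (lf(x_lf).mean() - yl.mean())
print(f"multi-fidelity estimate: {est:.4f} (exact mean = 0.05)")
```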
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed up simulation using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
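The two families compared above can be illustrated on a generic two-state (open/closed) channel; the rates and channel count below are arbitrary, and real implementations differ exactly in the details this sketch glosses over, such as bounding strategies and the coupling of gating particles.

```python
import numpy as np

rng = np.random.default_rng(8)
ALPHA, BETA, N = 1.0, 0.5, 500     # opening/closing rates (1/ms), channel count

def gillespie(T=10.0):
    """Exact MC: evolve the number of open channels event by event."""
    t, n_open, traj = 0.0, 0, []
    while t < T:
        r_open, r_close = ALPHA * (N - n_open), BETA * n_open
        rate = r_open + r_close
        t += rng.exponential(1.0 / rate)                 # waiting time
        n_open += 1 if rng.random() < r_open / rate else -1
        traj.append((t, n_open))
    return traj

def diffusion_approx(T=10.0, dt=1e-3):
    """Langevin DA: one SDE for the open fraction, state-dependent noise."""
    f = 0.0
    for _ in range(int(T / dt)):
        drift = ALPHA * (1 - f) - BETA * f
        noise = np.sqrt((ALPHA * (1 - f) + BETA * f) / N)
        f += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        f = min(max(f, 0.0), 1.0)                        # naive bounding to [0,1]
    return f

print(gillespie()[-1][1] / N, diffusion_approx())        # both near 2/3
```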
Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test
NASA Astrophysics Data System (ADS)
Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.
We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon-counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., as caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena probably hidden in high-energy transients.
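A periodogram-based analogue of such a surrogate-data test is sketched below; the real MC-SSA test projects onto data-adaptive SSA eigenvectors rather than Fourier bins, and the log-link AR(1) fit here is a crude illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(9)

def ar1_fit(x):
    """Crude lag-1 estimate of the AR(1) coefficient and innovation std."""
    x = x - x.mean()
    gamma = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return gamma, (x[1:] - gamma * x[:-1]).std()

def surrogate_test(counts, n_surr=500, q=95):
    """Flag periodogram bins of a photon-count series exceeding the q-th
    percentile of AR(1)-plus-Poisson surrogates (the null hypothesis)."""
    logx = np.log(counts + 1.0)              # crude link to the latent AR(1)
    gamma, s = ar1_fit(logx)
    n, mu = len(counts), logx.mean()
    p_data = np.abs(np.fft.rfft(counts - counts.mean()))**2
    p_surr = np.empty((n_surr, p_data.size))
    for i in range(n_surr):
        x = np.empty(n); x[0] = mu
        for j in range(1, n):
            x[j] = mu + gamma * (x[j-1] - mu) + s * rng.standard_normal()
        surr = rng.poisson(np.exp(x))        # photon-counting statistics
        p_surr[i] = np.abs(np.fft.rfft(surr - surr.mean()))**2
    return p_data > np.percentile(p_surr, q, axis=0)

t = np.arange(256)
counts = rng.poisson(np.exp(1.0 + 0.4 * np.sin(2 * np.pi * t / 16)))
print(np.nonzero(surrogate_test(counts))[0])  # expect a flag near bin 256/16 = 16
```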
Numerical Simulation of Chemically Reacting Flows
2015-09-03
SUBJECT TERMS: Numerical methods, Diffusion Flames, Adaptive Gridding, Velocity-Vorticity, Compact Methods
Surface tension of undercooled liquid cobalt
NASA Astrophysics Data System (ADS)
Yao, W. J.; Han, X. J.; Chen, M.; Wei, B.; Guo, Z. Y.
2002-08-01
This paper provides results on experimentally measured and numerically predicted surface tensions of undercooled liquid cobalt. The experiments were performed using the oscillating drop technique combined with electromagnetic levitation. The simulations were carried out with the Monte Carlo (MC) method, where the surface tension is predicted through calculations of the work of cohesion and the interatomic interaction is described by an embedded-atom method. The maximum undercooling of the liquid cobalt reached 231 K (0.13 T_m) in the experiment and 268 K (0.17 T_m) in the simulation. The surface tension and its relationship with temperature obtained in the experiment and simulation are σ_exp = 1.93 − 0.00033(T − T_m) N m⁻¹ and σ_cal = 2.26 − 0.00032(T − T_m) N m⁻¹, respectively. The temperature dependence of the surface tension calculated from the MC simulation is in reasonable agreement with that measured in the experiment.
NASA Astrophysics Data System (ADS)
Korayem, A. H.; Abdi, M.; Korayem, M. H.
2018-06-01
Surface topography at the nanoscale is one of the most important applications of AFM. Analysis of the vibration behavior of piezoelectric microcantilevers is essential to improve AFM performance. To this end, one appropriate method to simulate the dynamic behavior of a microcantilever (MC) is numerical solution with FEM in 3D modeling using COMSOL software. The present study simulates different geometries of four-layered AFM piezoelectric MCs in 2D and 3D modeling in a liquid medium using COMSOL. The 3D simulation was done in a spherical container using the FSI domain in COMSOL. In 2D modeling, the governing equation of motion was derived by applying Hamilton's principle based on Euler-Bernoulli beam theory and was discretized with FEM. In this mode, the hydrodynamic force was modeled as a string of spheres, and its effect, along with the squeezed-film force, was included in the MC equations. The effect of fluid density and viscosity on the vibrations of an MC immersed in different glycerin solutions was investigated in 2D and 3D modes, and the results were compared with experimental results. The frequencies and time responses of the MC close to the surface were obtained considering tip-sample forces. The surface topography performance of different MC geometries was compared in the liquid medium, in both tapping and non-contact modes. Various types of surface roughness were considered in the topography for the different MC geometries, and the effect of geometric dimensions on the surface topography was investigated. In a liquid medium, the MC is installed at an oblique position to avoid damage due to the squeezed-film force in the vicinity of the surface. Finally, the effect of the MC's angle on the surface topography and the time response of the system was investigated.
NASA Astrophysics Data System (ADS)
Isobe, Masaharu
Hard sphere/disk systems are among the simplest models and have been used to address numerous fundamental problems in statistical physics. The pioneering numerical works on the solid-fluid phase transition based on Monte Carlo (MC) and molecular dynamics (MD) methods, published in 1957, represent historical milestones that have had a significant influence on the development of computer algorithms and novel tools to obtain physical insights. This chapter addresses Alder's breakthroughs in hard sphere/disk simulation: (i) event-driven molecular dynamics, (ii) the long-time tail, (iii) the molasses tail, and (iv) two-dimensional melting/crystallization. From a numerical viewpoint, serious issues remain to be overcome for further breakthroughs. Here, we present a brief review of recent progress in this area.
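The core computation of the event-driven MD mentioned in (i) is the pairwise collision-time solve, sketched here for two hard disks.

```python
import numpy as np

def collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard disks of diameter sigma collide, or inf.
    Solves |dr + t*dv| = sigma for the smallest positive root."""
    dr, dv = r2 - r1, v2 - v1
    b = np.dot(dr, dv)
    if b >= 0:                          # moving apart: no collision
        return np.inf
    dv2 = np.dot(dv, dv)
    disc = b * b - dv2 * (np.dot(dr, dr) - sigma**2)
    if disc < 0:                        # glancing geometry: they miss
        return np.inf
    return (-b - np.sqrt(disc)) / dv2

print(collision_time(np.zeros(2), np.array([1.0, 0.0]),
                     np.array([3.0, 0.0]), np.zeros(2), sigma=1.0))  # -> 2.0
```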
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
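The idea behind Functional Expansion Tallies can be sketched in 1D: score Legendre moments of an MC-sampled distribution during transport, then evaluate the smooth expansion anywhere on the receiving mesh. The weights and power tilt below are invented, and real FETs handle normalization and uncertainty estimation not shown here.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(5)

# Collision sites on [-1, 1] with weights mimicking a tilted power shape
x = 2 * rng.random(100_000) - 1
w = 1.0 + 0.3 * x

# FET scoring: a_n = (2n+1) * <w * P_n(x)> for uniformly sampled sites
order = 4
a = [(2 * n + 1) * np.mean(w * L.legval(x, [0] * n + [1]))
     for n in range(order + 1)]

# Evaluate the continuous expansion at arbitrary mesh points
xx = np.linspace(-1, 1, 5)
print(L.legval(xx, a))        # recovers ~ 1 + 0.3x without fixed bins
```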
Physical, chemical and biological properties of simulated beef cattle bedded manure packs
USDA-ARS?s Scientific Manuscript database
Manure including bedding material can be a valuable fertilizer, yet numerous, poorly characterized, environmental factors control its quality. The objective was to determine whether moisture content (MC), nutrient value (ammonium nitrogen (NH4-N), total nitrogen (TN), total phosphorus (TP), total po...
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320
High Frequency Bottom Interaction in Range Dependent Biot Media
1999-09-30
Thomson, R; Kawrakow, I
2012-06-01
Widely-used classical trajectory Monte Carlo simulations of low energy electron transport neglect the quantum nature of electrons; however, at sub-1 keV energies quantum effects have the potential to become significant. This work compares quantum and classical simulations within a simplified model of electron transport in water. Electron transport is modeled in water droplets using quantum mechanical (QM) and classical trajectory Monte Carlo (MC) methods. Water droplets are modeled as collections of point scatterers representing water molecules from which electrons may be isotropically scattered. The role of inelastic scattering is investigated by introducing absorption. QM calculations involve numerically solving a system of coupled equations for the electron wavefield incident on each scatterer. A minimum distance between scatterers is introduced to approximate structured water. The average QM water droplet incoherent cross section is compared with the MC cross section; a relative error (RE) on the MC results is computed. RE varies with electron energy, average and minimum distances between scatterers, and scattering amplitude. The mean free path is generally the relevant length scale for estimating RE. The introduction of a minimum distance between scatterers increases RE substantially (factors of 5 to 10), suggesting that the structure of water must be modeled for accurate simulations. Inelastic scattering does not improve agreement between QM and MC simulations: for the same magnitude of elastic scattering, the introduction of inelastic scattering increases RE. Droplet cross sections are sensitive to droplet size and shape; considerable variations in RE are observed with changing droplet size and shape. At sub-1 keV energies, quantum effects may become non-negligible for electron transport in condensed media. Electron transport is strongly affected by the structure of the medium. Inelastic scatter does not improve agreement between QM and MC simulations of low energy electron transport in condensed media. © 2012 American Association of Physicists in Medicine.
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under the conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
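A generic sketch of the Robbins-Monro coupling idea named above: damp a fixed-point iteration whose residual is only available as a noisy MC estimate. The toy problem below stands in for the FV/MC plasma-neutral coupling and is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(10)

def robbins_monro(noisy_residual, x0, n_iter=200):
    """Robbins-Monro iteration: damp a fixed-point update whose residual
    is only available as a noisy Monte Carlo estimate."""
    x = x0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k                    # diminishing step size
        x += a_k * noisy_residual(x)     # noisy update, progressively averaged
    return x

# Toy stand-in for the FV/MC residual: solve g(x) = 2 - x = 0 under MC noise
g = lambda x: (2.0 - x) + 0.5 * rng.standard_normal()
print(robbins_monro(g, x0=0.0))          # converges near 2.0
```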
Modelling Accuracy of a Car Steering Mechanism with Rack and Pinion and McPherson Suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-08-01
The modelling accuracy of a car steering mechanism with a rack and pinion and McPherson suspension is analyzed. Geometrical parameters of the model are described using the coordinates of the centers of the spherical joints, and the directional unit vectors and axis points of the revolute, cylindrical and prismatic joints. Modelling accuracy is defined as the difference between the wheel knuckle position and orientation coordinates obtained with the simulation model and the corresponding measured values. The sensitivity of the model accuracy to the parameters is illustrated by two numerical examples.
NASA Astrophysics Data System (ADS)
Lin, Y.; Wukitch, S. J.; Edlund, E.; Ennever, P.; Hubbard, A. E.; Porkolab, M.; Rice, J.; Wright, J.
2017-10-01
In recent three-ion-species (majority D and H plus a trace level of 3He) ICRF heating experiments on Alcator C-Mod, double mode conversion (MC) on both sides of the 3He cyclotron resonance has been observed using the phase contrast imaging (PCI) system. The MC locations are used to estimate the species concentrations in the plasma. Simulation using TORIC shows that with the 3He level <1%, most RF power is absorbed by the 3He ions and the process can generate energetic 3He ions. In the MC flow drive experiment in D(3He) plasma at 8 T, MC waves were also monitored by PCI. The amplitude and wavenumber kR of the MC ion cyclotron wave (ICW) have been found to correlate with the flow drive force. The MC efficiency, the wavenumber k of the MC ICW, and their dependence on plasma parameters such as Te0 have been studied. Based on the experimental observations and a numerical study of the dispersion solutions, a hypothesis for the flow drive mechanism has been proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarrays. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method for reliably predicting protein concentrations and estimating their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daryl Leon Wasden; Hussein Moradi; Behrouz Farhang-Broujeny
2014-06-01
This paper presents a theoretical analysis of the performance of a filter bank-based multicarrier spread spectrum (FB-MC-SS) system. We consider an FB-MC-SS setup where each data symbol is spread across multiple subcarriers, but there is no spreading in time. The results are then compared with those of the well-known direct sequence spread spectrum (DS-SS) system with a rake receiver for its best performance. We compare the two systems when the channel noise is white. We prove that as the processing gains of the two systems tend to infinity, both approach the same performance. However, numerical simulations show that in practice, where the processing gain is limited, FB-MC-SS outperforms DS-SS.
Influence of photon energy cuts on PET Monte Carlo simulation results.
Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan
2012-07-01
The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and the GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences, as well as on the simulation time and noise-equivalent count rate, is evaluated by comparing the results for higher cuts to those for a 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC-generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi K x rays must be simulated in order to obtain unbiased estimates of the number of singles, true, scattered, and random coincidences, as well as an unbiased estimate of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton-scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and on the similarity measures used for their comparison is statistically significant for very high cuts (e.g., 350 keV). The simulation of the transport of characteristic x rays plays an important role if accurate modeling of a PET scanner system is to be achieved. The simulation time decreases only slowly with increasing cut, which, combined with the accuracy loss at high cuts, means that high photon energy cuts are not recommended for accelerating MC simulations.
Implicit Large Eddy Simulation of a wingtip vortex at Re_c = 1.2×10^6
NASA Astrophysics Data System (ADS)
Lombard, Jean-Eloi; Moxey, Dave; Sherwin, Spencer; SherwinLab Team
2015-11-01
We present recent developments in numerical methods for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex. The development of these vortices in the near wake, in combination with the large Reynolds numbers involved, makes these test cases particularly challenging to investigate numerically. To demonstrate the method's viability, we present results from numerical simulations of flow over a NACA 0012 wingtip profile at Re_c = 1.2×10^6 and compare them against experimental data; this is to date the highest Reynolds number achieved for an LES that has been correlated with experiments for this test case. Our model correlates favorably with experiment, both for the characteristic jetting in the primary vortex and for the pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex-dominated flows over complex geometries. McLaren Racing/Royal Academy of Engineering Research Chair.
Computational fluid dynamics applications at McDonnell Douglas
NASA Technical Reports Server (NTRS)
Hakkinen, R. J.
1987-01-01
Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.
Mak, Chi H
2015-11-25
While single-stranded (ss) segments of DNAs and RNAs are ubiquitous in biology, details about their structures have only recently begun to emerge. To study ssDNA and RNAs, we have developed a new Monte Carlo (MC) simulation using a free energy model for nucleic acids that has the atomistic accuracy to capture fine molecular details of the sugar-phosphate backbone. Formulated on the basis of a first-principles calculation of the conformational entropy of the nucleic acid chain, this free energy model correctly reproduced both the long and short length-scale structural properties of ssDNA and RNAs in a rigorous comparison against recent data from fluorescence resonance energy transfer, small-angle X-ray scattering, force spectroscopy and fluorescence correlation transport measurements on sequences up to ~100 nucleotides long. With this new MC algorithm, we conducted a comprehensive investigation of the entropy landscape of small RNA stem-loop structures. From a simulated ensemble of ~10^6 equilibrium conformations, the entropy for the initiation of different-size RNA hairpin loops was computed and compared against thermodynamic measurements. Starting from seeded hairpin loops, constrained MC simulations were then used to estimate the entropic costs associated with propagation of the stem. The numerical results provide new direct molecular insights into thermodynamic measurements from macroscopic calorimetry and melting experiments.
Toward GPGPU accelerated human electromechanical cardiac simulations
Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David
2014-01-01
In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492
Mermigkis, Panagiotis G; Tsalikis, Dimitrios G; Mavrantzas, Vlasis G
2015-10-28
A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice to the bonds of which rate constants are assigned governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuum-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, D(eff), of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the concentration C (vol. %) in nanotubes. For a constant value of the concentration C, D(eff) is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for D(eff) as a function of C and L/D that can describe the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on D(eff) (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate times and correlated them with the time needed for penetrant water molecules to explore the available large, fast-diffusing CNT pores before Fickian diffusion is reached.
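The two-rate lattice picture is easy to prototype. The sketch below is a minimal one-dimensional, continuous-time kMC analogue (hypothetical rates, random CNT site labelling, single-walker mean-square-displacement estimator), in the spirit of, but far simpler than, the three-dimensional simulations described above.

```python
import math
import random

K_PMMA, K_CNT = 1.0, 1000.0   # jump rates; CNT sites ~3 orders of magnitude faster
L = 1000                      # lattice sites, unit spacing
CNT_FRACTION = 0.05           # hypothetical loading: fraction of CNT-labelled sites

def run_walker(n_jumps=200_000, seed=1):
    rng = random.Random(seed)
    is_cnt = [rng.random() < CNT_FRACTION for _ in range(L)]
    pos, x, t = 0, 0.0, 0.0
    for _ in range(n_jumps):
        k = K_CNT if is_cnt[pos % L] else K_PMMA   # rate set by the current domain
        t += -math.log(rng.random()) / (2.0 * k)   # Gillespie-type waiting time
        step = 1 if rng.random() < 0.5 else -1     # symmetric nearest-neighbour jump
        pos += step
        x += step
    return x * x / (2.0 * t)   # crude D_eff estimate from a single trajectory

# average the noisy single-walker estimates over independent realizations
print(sum(run_walker(seed=s) for s in range(20)) / 20.0)
```

Even with the thousandfold rate contrast, the walker spends most of its time on the slow PMMA sites, which is why the effective diffusivity stays of the order of the PMMA value, as the study reports.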
A Multi-Scale Modeling Framework for Shear Initiated Reactions in Energetic Materials
2013-07-01
Laboratory, 2004. 10. Fermen-Coker, M., "Numerical Simulation of Adiabatic Shear Bands in Ti-6Al-4V Alloy Due to Fragment Impact," ARL-RP-91; U.S... V.G., "Application of the Morse Potential Function to Cubic Metals," Phys. Rev., Vol. 114, pp. 687-690, 1959. 15. McQuarrie, D.A., Statistical
Mode conversion in ICRF experiments on Alcator C-Mod
NASA Astrophysics Data System (ADS)
Lin, Y.; Wukitch, S. J.; Edlund, E.; Ennever, P.; Hubbard, A. E.; Porkolab, M.; Rice, J.; Wright, J.
2017-10-01
In a recent three-ion-species (majority D and H plus a trace level of 3He) ICRF heating experiment on Alcator C-Mod, double mode conversion on both sides of the 3He cyclotron resonance has been observed using the phase contrast imaging (PCI) system. The MC locations are used to estimate the species concentrations in the plasma. Simulation using TORIC shows that with the 3He level <1%, most RF power is absorbed by the 3He ions and the process can generate energetic 3He ions. In a recent mode conversion flow drive experiment in D(3He) plasma at 8 T, MC waves were also monitored by PCI. The MC ion cyclotron wave (ICW) amplitude and wavenumber kR have been found to correlate with the flow drive force. The MC efficiency, the wavenumber k of the MC ICW, and their dependence on plasma parameters such as Te0 are shown to play important roles. Based on the experimental observations and a numerical study of the dispersion solutions, a hypothesis for the flow drive mechanism has been proposed. Supported by USDoE awards DE-FC02-99ER54512.
Liu, Jian; Pedroza, Luana S; Misch, Carissa; Fernández-Serra, Maria V; Allen, Philip B
2014-07-09
We present total energy and force calculations for the (GaN)1-x(ZnO)x alloy. Site-occupancy configurations are generated from Monte Carlo (MC) simulations, on the basis of a cluster expansion model proposed in a previous study. Local atomic coordinate relaxations of surprisingly large magnitude are found via density-functional calculations using a 432-atom periodic supercell, for three representative configurations at x = 0.5. These are used to generate bond-length distributions. The configurationally averaged composition- and temperature-dependent short-range order (SRO) parameters of the alloys are discussed. The entropy is approximated in terms of pair distribution statistics and thus related to SRO parameters. This approximate entropy is compared with accurate numerical values from MC simulations. An empirical model for the dependence of the bond length on the local chemical environments is proposed.
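As a generic illustration of the short-range order analysis mentioned here (not the authors' cluster-expansion model), the Warren-Cowley nearest-neighbour parameter can be computed directly from an MC-generated site-occupancy configuration; the snippet below does so on a toy two-dimensional binary lattice with hypothetical random occupancies.

```python
import random

N = 64
rng = random.Random(0)
# True marks a B-type site (e.g., ZnO); a random 50/50 configuration here
occ = [[rng.random() < 0.5 for _ in range(N)] for _ in range(N)]

x_b = sum(map(sum, occ)) / N**2          # overall B concentration
ab_pairs, a_pairs = 0, 0
for i in range(N):
    for j in range(N):
        if not occ[i][j]:                # A-centred nearest-neighbour pairs
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a_pairs += 1
                ab_pairs += occ[(i + di) % N][(j + dj) % N]

# Warren-Cowley alpha: 0 for a random alloy, <0 for ordering, >0 for clustering
alpha = 1.0 - (ab_pairs / a_pairs) / x_b
print(alpha)
```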
NASA Astrophysics Data System (ADS)
Keller, Tobias; Katz, Richard F.
2015-04-01
Laboratory experiments indicate that even small concentrations of volatiles (H2O or CO2) in the upper mantle significantly affect the silicate melting behavior [HK96,DH06]. The presence of volatiles stabilizes volatile-rich melt at high pressure, thus vastly increasing the volume of the upper mantle expected to be partially molten [H10,DH10]. These small-degree melts have important consequences for chemical differentiation and could affect the dynamics of mantle flow. We have developed a theory and numerical implementation to simulate thermo-chemically coupled magma/mantle dynamics in terms of a two-phase (rock+melt), three-component (dunite+MORB+volatilized MORB) physical model. The fluid dynamics is based on McKenzie's equations [McK84], while the thermo-chemical formulation of the system is represented by a novel disequilibrium multi-component melting model based on thermodynamic theory [RBS11]. This physical model is implemented as a parallel, two-dimensional, finite-volume code that leverages tools from the PETSc toolkit. Application of this simulation code to a mid-ocean ridge system suggests that the methodology captures the leading-order features of both hydrated and carbonated mantle melting, including deep, low-degree, volatile-rich melt formation. Melt segregation leads to continuous dynamic thermo-chemical disequilibration, while phenomenological reaction rates are applied to continually move the system towards re-equilibration. The simulations will be used first to characterize volatile extraction from the MOR system assuming a chemically homogeneous mantle. Subsequently, simulations will be extended to investigate the consequences of heterogeneity in lithology [KW12] and volatile content. These studies will advance our understanding of the role of volatiles in the dynamic and chemical evolution of the upper mantle. Moreover, they will help to gauge the significance of the coupling between the deep carbon cycle and the ocean/atmosphere system. REFERENCES: HK96 Hirth & Kohlstedt (1996), Earth Planet Sci Lett; DH06 Dasgupta & Hirschmann (2006), doi:10.1038/nature04612; H10 Hirschmann (2010), doi:10.1016/j.pepi.2009.12.003; DH10 Dasgupta & Hirschmann (2010), doi:10.1016/j.epsl.2010.06.039; McK84 McKenzie (1984), J Pet; KW12 Katz & Weatherley (2012), doi:10.1016/j.epsl.2012.04.042; RBS11 Rudge, Bercovici & Spiegelman (2011), doi:10.1111/j.1365-246X.2010.04870.x
NASA Astrophysics Data System (ADS)
Xiong, Ming; Zheng, Huinan; Wu, S. T.; Wang, Yuming; Wang, Shui
2007-11-01
Numerical studies of interplanetary "multiple magnetic clouds (Multi-MC)" are performed with a 2.5-dimensional ideal magnetohydrodynamic (MHD) model in the heliospheric meridional plane. Both a slow MC1 and a fast MC2 are initially launched along the heliospheric equator, one after another with different time intervals. The coupling of the two MCs can be considered as the comprehensive interaction between two systems, each comprising an MC body and its driven shock. The MC2-driven shock and the MC2 body are successively involved in the interaction with the MC1 body, and momentum is transferred from MC2 to MC1. After the passage of the MC2-driven shock front, magnetic field lines in the MC1 medium previously compressed by the MC2-driven shock are prevented from being restored by the pushing of the MC2 body. The MC1 body undergoes the most violent compression from the ambient solar wind ahead, the continuous penetration of the MC2-driven shock through the MC1 body, and the persistent pushing of the MC2 body at the MC1 tail boundary. As the evolution proceeds, the MC1 body suffers larger and larger compression, and its originally vulnerable magnetic elasticity becomes stiffer and stiffer. There thus exists a maximum compressibility of the Multi-MC, reached when the accumulated elasticity can balance the external compression. This cutoff limit of compressibility largely decides the maximum available geoeffectiveness of the Multi-MC, because the geoeffectiveness enhancement of interacting MCs is ascribed to the compression. In particular, the greatest geoeffectiveness is excited, among all combinations of MC helicities, if the magnetic field lines in the interacting region of the Multi-MC are all southward. The Multi-MC completes its final evolutionary stage when the MC2-driven shock merges with the MC1-driven shock into a stronger compound shock. With respect to Multi-MC geoeffectiveness, the evolution stage is a dominant factor, whereas the collision intensity is a subordinate one. The magnetic elasticity and magnetic helicity of each MC, and the compression between them, are the key physical factors for the formation, propagation, evolution, and resulting geoeffectiveness of interplanetary Multi-MCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles; according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and the region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: A t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
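A minimal sketch of the statistical idea, assuming batch means of a hypothetical dose tally as the i.i.d. samples: accumulate batches and stop once the t-based confidence half-width of the mean drops below a chosen error threshold. A fixed t-value is used for brevity (in practice one would take scipy.stats.t.ppf at the current degrees of freedom), and the null-hypothesis formulation of the paper is simplified to a half-width test.

```python
import math
import random
from statistics import mean, stdev

T_CRIT = 2.0   # fixed t-value for brevity; use scipy.stats.t.ppf(0.975, n - 1)

def simulate_batch(rng, n_particles=1000):
    # stand-in for an MC dose tally: hypothetical dose per particle
    return mean(rng.gauss(1.0, 0.3) for _ in range(n_particles))

rng = random.Random(42)
batches, eps = [], 0.005   # eps: user-chosen error threshold
while True:
    batches.append(simulate_batch(rng))
    if len(batches) >= 10:   # need a few batches before testing
        half_width = T_CRIT * stdev(batches) / math.sqrt(len(batches))
        if half_width < eps:   # stop: mean dose known to within eps
            break
print(len(batches), "batches; dose =", mean(batches), "+/-", half_width)
```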
Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco
2014-01-01
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented which takes both the particle- and wave-like properties of X-rays into consideration: a split approach combining a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance the simulation of grating interferometry or propagation-based imaging. PMID:24763652
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
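The two-stage logic can be shown with a deliberately simple one-dimensional stand-in (hypothetical "expensive" model, cubic-polynomial surrogate in place of the paper's polynomial chaos expansion): stage one screens all MC samples with the surrogate, stage two re-evaluates only the samples whose surrogate response falls near the failure threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # hypothetical "true" model, standing in for a transport simulation
    return np.exp(0.8 * x) + 0.05 * np.sin(20.0 * x)

x = rng.standard_normal(100_000)                 # MC sample of the uncertain input
xs = x[:200]                                     # small training set (expensive runs)
coef = np.polyfit(xs, expensive_model(xs), 3)    # cheap polynomial surrogate
g = np.polyval(coef, x)                          # stage 1: surrogate everywhere

THRESH, BAND = 5.0, 0.2                          # failure threshold, re-check band
clear_fail = g > THRESH * (1.0 + BAND)           # surrogate confidently above
near = np.abs(g - THRESH) <= THRESH * BAND       # ambiguous: near the boundary
n_fail = clear_fail.sum() + (expensive_model(x[near]) > THRESH).sum()

p_ref = (expensive_model(x) > THRESH).mean()     # brute-force reference
print(n_fail / x.size, p_ref, near.sum(), "re-evaluations")
```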
NASA Astrophysics Data System (ADS)
Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca
2017-07-01
Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily based on coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
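For reference, the standard MLMC estimator that MLCV extends can be written in a few lines. The toy below (hypothetical geometric Brownian motion, Euler-Maruyama with 2^l steps per level, coupled coarse/fine paths) estimates E[S(1)] by the usual telescoping sum, with fewer samples spent on the finer, more expensive levels.

```python
import numpy as np

rng = np.random.default_rng(1)
MU, SIG = 0.05, 0.2   # hypothetical drift and volatility of dS = MU*S dt + SIG*S dW

def level_estimate(level, n_samples):
    nf = 2 ** level                       # fine-grid steps; coarse grid uses nf // 2
    dt = 1.0 / nf
    dw = rng.normal(0.0, np.sqrt(dt), (n_samples, nf))
    sf = np.ones(n_samples)
    for k in range(nf):                   # Euler-Maruyama on the fine grid
        sf *= 1.0 + MU * dt + SIG * dw[:, k]
    if level == 0:
        return sf.mean()                  # coarsest level: plain MC estimate
    sc = np.ones(n_samples)
    dwc = dw[:, 0::2] + dw[:, 1::2]       # same Brownian increments, coarser grid
    for k in range(nf // 2):
        sc *= 1.0 + MU * 2.0 * dt + SIG * dwc[:, k]
    return (sf - sc).mean()               # correction term E[P_l - P_{l-1}]

samples_per_level = [40_000, 20_000, 10_000, 5_000]   # fewer samples on finer grids
estimate = sum(level_estimate(l, n) for l, n in enumerate(samples_per_level))
print(estimate, "vs exact", np.exp(MU))
```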
NASA Astrophysics Data System (ADS)
Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying
2015-04-01
In this paper we investigate, theoretically and via Monte Carlo simulations, the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion-process models; the probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases in the system: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with the simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
Microcanonical ensemble simulation method applied to discrete potential fluids
NASA Astrophysics Data System (ADS)
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition-rate probabilities between macroscopic states; its advantage with respect to conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, whose properties are generalizations of those of the SW and square-shoulder fluids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Shoudi; He, Jiansen; Yang, Liping
The impact of an overtaking fast shock on a magnetic cloud (MC) is a pivotal process in CME–CME (CME: coronal mass ejection) interactions and CME–SIR (SIR: stream interaction region) interactions. An MC with a strong and rotating magnetic field is usually deemed a crucial part of CMEs. To study the impact of a fast shock on an MC, we perform a 2.5-dimensional numerical magnetohydrodynamic simulation. Two cases are run in this study: without and with impact by the fast shock. In the former case, the MC expands gradually from its initial state and drives a relatively slow magnetic reconnection with the ambient magnetic field. Analysis of the forces near the core of the MC as a whole body indicates that the solar gravity is quite small compared to the Lorentz force and the pressure gradient force. In the second run, a fast shock propagates, relative to the background plasma, at a speed twice the perpendicular fast magnetosonic speed, and catches up with and takes over the MC. Due to the penetration of the fast shock, the MC is highly compressed and heated, with the temperature growth rate enhanced by a factor of about 10 and the velocity increased to about half of the shock speed. The magnetic reconnection with the ambient magnetic field is also sped up, by a factor of two to four in reconnection rate, as a result of the enhanced density of the current sheet, which is squeezed by the forward motion of the shocked MC.
NASA Astrophysics Data System (ADS)
Manganaro, L.; Russo, G.; Bourhaleb, F.; Fausti, F.; Giordanengo, S.; Monaco, V.; Sacchi, R.; Vignati, A.; Cirio, R.; Attili, A.
2018-04-01
One major rationale for the application of heavy ion beams in tumour therapy is their increased relative biological effectiveness (RBE). The complex dependencies of the RBE on dose, biological endpoint, position in the field, etc. require the use of biophysical models in treatment planning and clinical analysis. This study aims to introduce a new software toolkit, named ‘Survival’, to facilitate the radiobiological computations needed in ion therapy. The simulation toolkit was written in C++ and was developed with a modular architecture in order to easily incorporate different radiobiological models. The following models were successfully implemented: the local effect model (LEM, versions I, II and III) and variants of the microdosimetric-kinetic model (MKM). Different numerical evaluation approaches were also implemented: Monte Carlo (MC) numerical methods and a set of faster analytical approximations. Among the possible applications, the toolkit was used to reproduce the RBE versus LET for different ions (proton, He, C, O, Ne) and different cell lines (CHO, HSG). Intercomparisons between different models (LEM and MKM) and computational approaches (MC and fast approximations) were performed. The developed software could represent an important tool for the evaluation of the biological effectiveness of charged particles in ion beam therapy, in particular when coupled with treatment simulations. Its modular architecture facilitates benchmarking and inter-comparison between different models and evaluation approaches. The code is open source (GPL2 license) and available at https://github.com/batuff/Survival.
2006-10-01
The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum...kinetic Monte Carlo, kMC, equilibrium MC, Lattice-Boltzmann, LB, Brownian Dynamics, BD, or general agent-based, AB) simulators. It also, fortuitously...cond-mat/0310460 at arXiv.org. 27. "Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W
NASA Astrophysics Data System (ADS)
Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling
2017-01-01
Early detection of knee osteoarthritis (KOA) is meaningful to delay or prevent the onset of osteoarthritis. In consideration of the structural complexity of the knee joint, the positions of light incidence and detection are extremely important in optical inspection. In this paper, the propagation of 780-nm near-infrared photons in a three-dimensional knee joint model is simulated by the Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of the incident and detecting locations on the number of detected signal photons and the signal-to-noise ratio (SNR). Firstly, a three-dimensional photon propagation model of the knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in the three-dimensional knee joint model. Photons which finally migrate out of the knee joint surface are numerically analyzed. By analyzing the number of signal photons and the SNR from the six given incident locations, the optimal incident and detecting location is defined. Finally, a series of phantom experiments are conducted to verify the simulation results. According to the simulation and phantom experiment results, the best incident location is near the right side of the meniscus at the rear end of the left knee joint, and the detector should correspondingly be set near the patella.
On Fitting a Multivariate Two-Part Latent Growth Model
Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.
2017-01-01
A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and, if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally, where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model while taking into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
Monte-Carlo simulation of a stochastic differential equation
NASA Astrophysics Data System (ADS)
Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG
2017-12-01
For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as finite element or finite difference methods. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to use larger step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
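To make the setting concrete: the diffusion equation ∂u/∂t = ∂/∂x(D(x) ∂u/∂x) can be sampled by walkers obeying the Itô SDE dX = D′(X) dt + √(2D(X)) dW, where the drift term is the correction required when D varies in space. Below is a minimal Euler-Maruyama sketch with a hypothetical D(x); this is the simple first-order scheme whose step-size restriction the higher-order operators discussed above aim to relax.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):                      # hypothetical inhomogeneous diffusion coefficient
    return 1.0 + 0.5 * np.sin(x)

def D_prime(x):                # its spatial derivative (the spurious-drift term)
    return 0.5 * np.cos(x)

def evolve(n_walkers=50_000, t_end=1.0, dt=1e-3):
    x = np.zeros(n_walkers)    # delta-function initial condition at x = 0
    for _ in range(int(t_end / dt)):
        dw = rng.normal(0.0, np.sqrt(dt), n_walkers)
        x += D_prime(x) * dt + np.sqrt(2.0 * D(x)) * dw   # Euler-Maruyama step
    return x

x = evolve()
print("mean:", x.mean(), "variance:", x.var())   # moments of u(x, t = 1)
```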
NASA Astrophysics Data System (ADS)
Tarasenko, Alexander
2018-01-01
Diffusion of particles adsorbed on a homogeneous one-dimensional lattice is investigated using a theoretical approach and MC simulations. The analytical dependencies calculated in the framework of this approach are tested against the numerical data. The excellent agreement between the data obtained by these two different methods demonstrates the correctness of the approach, which is based on the theory of the non-equilibrium statistical operator.
Layering of sustained vortices in rotating stratified fluids
NASA Astrophysics Data System (ADS)
Aubert, O.; Le Bars, M.; Le Gal, P.
2013-05-01
The ocean is a natural stratified fluid layer where large structures are influenced by the rotation of the planet through the Coriolis force. In particular, the ocean Meddies are long-lived anticyclonic pancake vortices of Mediterranean origin evolving in the Atlantic Ocean: they have a saltier and warmer core than the surrounding oceanic water, their diameters reach up to 100 km, and they can survive for 2 to 3 years in the ocean. Their extensive study using seismic images revealed fine structures surrounding their core (Biescas et al., 2008; Ruddick et al., 2009), corresponding to layers of constant density whose thickness is about 40 m and whose horizontal extent is more than 10 km. These layers can have different origins: salt fingers from a double-diffusive instability of salt and heat (Ruddick & Gargett, 2003), viscous overturning motions from a double-diffusive instability of salt and momentum (McIntyre, 1970), or global modes of the quasi-geostrophic instability (Nguyen et al., 2011)? As observed by Griffiths & Linden (1981), sustained laboratory anticyclonic vortices created via a continuous injection of isodense fluid into a rotating and linearly stratified layer of salty water are quickly surrounded by layers of constant density. In the continuity of their experiments, we systematically investigated the double-diffusive instability of McIntyre by varying the Coriolis parameter f and the buoyancy frequency N of the background, both in experiments and in numerical simulations, and studied the influence of the Schmidt number in numerical simulations. Following McIntyre's approach, the typical length and time scales of the instability are well described by a linear stability analysis based on a Gaussian vortex model that fits both laboratory and oceanic vortices. The instability appears to be favoured by high Rossby numbers and high ratios f/N. We then apply these results to ocean Meddies and conclude about their stability.
NASA Astrophysics Data System (ADS)
Mechem, David B.; Giangrande, Scott E.
2018-03-01
Controls on precipitation onset and the transition from shallow cumulus to congestus are explored using a suite of 16 large-eddy simulations based on the 25 May 2011 event from the Midlatitude Continental Convective Clouds Experiment (MC3E). The thermodynamic variables in the model are relaxed at various timescales to observationally constrained temperature and moisture profiles in order to better reproduce the observed behavior of precipitation onset and total precipitation. Three of the simulations stand out as best matching the precipitation observations and also perform well for independent comparisons of cloud fraction, precipitation area fraction, and evolution of cloud top occurrence. All three simulations exhibit a destabilization over time, which leads to a transition to deeper clouds, but the evolution of traditional stability metrics by themselves is not able to explain differences in the simulations. Conditionally sampled cloud properties (in particular, mean cloud buoyancy), however, do elicit differences among the simulations. The inability of environmental profiles alone to discern subtle differences among the simulations and the usefulness of conditionally sampled model quantities argue for hybrid observational/modeling approaches. These combined approaches enable a more complete physical understanding of cloud systems by combining observational sampling of time-varying three-dimensional meteorological quantities and cloud properties, along with detailed representation of cloud microphysical and dynamical processes from numerical models.
NASA Astrophysics Data System (ADS)
Savarin, A.; Chen, S. S.
2017-12-01
The Madden-Julian Oscillation (MJO) is a dominant mode of intraseasonal variability in the tropics. Large-scale convection fueling the MJO is initiated over the tropical Indian Ocean and propagates eastward across the Maritime Continent (MC) and into the western Pacific. Observational studies have shown that nearly 40-50% of MJO events cannot pass through the MC, which is known as the MC barrier effect. Previous studies have also shown a strong diurnal cycle of convection over the islands and coastal seas, with an afternoon precipitation maximum over land and high terrain, and an early-morning maximum over water and mountain valley areas. As an eastward-propagating MJO convective event passes over the MC, its nature may be altered by the complex interaction with the large islands and topography. In turn, the passage of an MJO event modulates local conditions over the MC. The diurnal cycle of convection over the MC and its modulation by the MJO are not well understood and are poorly represented in global numerical prediction models. This study aims to improve our understanding of how the diurnal cycle of convection and the presence of the islands of the MC affect the eastward propagation of the MJO over the region. To this end, we use the Unified Wave Interface-Coupled Model (UWIN-CM) in its fully coupled atmosphere-ocean configuration at a convection-permitting (4 km) resolution over the region. The control simulation is of the MJO event that occurred in November-December 2011, and has been verified against the Dynamics of the MJO (DYNAMO) field campaign observations, TRMM precipitation, and reanalysis products. To investigate the effects of the tropical islands on the MJO, we conduct two additional numerical experiments, one with preserved island shapes but flattened topography, and one where the islands are replaced by water. The differences in the diurnal cycle and convective organization among these experiments will provide insights into the origin of the MC barrier effect and the physical processes affecting MJO convection over the MC. It is hypothesized that flattening the terrain modifies the locations of diurnal precipitation maxima over the islands and surrounding seas, while removing the islands results in a smoother eastward propagation of the MJO.
Wave-Current Interactions in a wind-jet region
NASA Astrophysics Data System (ADS)
Ràfols, Laura; Grifoll, Manel; Espino, Manuel; Cerralbo, Pablo; Sairouní, Abdel; Bravo, Manel; Sánchez-Arcilla, Agustín
2017-04-01
The Wave-Current Interactions (WCI) are investigated by examining the influence of coupling two numerical models. The Regional Ocean Model System (ROMS; Shchepetkin and McWilliams, 2005) and the Simulating Waves Nearshore model (SWAN; Booij et al., 1999) are used in a high-resolution domain (350 m). For the initial and boundary conditions, data from the IBI-MFC products have been used, and the atmospheric forcing fields have been obtained from the Catalan Meteorological Service (SMC). Results from uncoupled numerical models are compared with one-way and two-way coupling simulations. The study area is located at the northern margin of the Ebro Shelf (NW Mediterranean Sea), where episodes of strong cross-shelf wind occur. The results show that during these episodes the water currents obtained in the two-way simulation agree better with the observations than those from the other simulations. Additionally, when the water currents are considered, the wave energy (and thus the significant wave height) decreases when the current flows in the same direction as the waves propagate. The relative importance of the different terms of the momentum balance equation is also analyzed.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473
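As a reminder of the core machinery such reviews build on, here is a minimal Metropolis sampler in the canonical ensemble: a single torsion-like degree of freedom in a hypothetical three-well potential, with local displacement moves. Real biomacromolecular movesets mix such local moves with concerted or pivot moves, which is exactly the efficiency question discussed above.

```python
import math
import random

BETA = 1.0 / 0.6   # hypothetical 1/kT (energies in arbitrary units)

def energy(phi):
    # toy torsional potential with three minima
    return 1.5 * (1.0 + math.cos(3.0 * phi)) + 0.4 * (1.0 + math.cos(phi))

rng = random.Random(7)
phi, e = 0.0, energy(0.0)
n_steps, accepted = 100_000, 0
for _ in range(n_steps):
    trial = phi + rng.uniform(-0.3, 0.3)   # local move in the single DOF
    e_trial = energy(trial)
    # Metropolis criterion: always accept downhill, Boltzmann-accept uphill
    if e_trial <= e or rng.random() < math.exp(-BETA * (e_trial - e)):
        phi, e = trial, e_trial
        accepted += 1
print("acceptance ratio:", accepted / n_steps)
```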
Synergism of the method of characteristics and CAD technology for neutron transport calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z.; Wang, D.; He, T.
2013-07-01
The method of characteristics (MOC) has been a very popular methodology for neutron transport calculation and numerical simulation in recent decades due to its unique advantages. One of the key problems determining whether the MOC can be applied in complicated and highly heterogeneous geometry is how to combine an effective geometry processing method with the MOC. Most existing MOC codes describe the geometry by lines and arcs with extensive input data, such as circles, ellipses, regular polygons and combinations of them. Thus they have difficulty in geometry modeling, background meshing and ray tracing for complicated geometry domains. In this study, a new idea making use of the CAD solid modeler MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport developed by the FDS Team in China, was introduced for the geometry modeling and ray tracing of particle transport, to remove the geometrical limitations mentioned above. The diamond-difference scheme was applied to the MOC to reduce the spatial discretization error of the flat-flux approximation. Based on MCAM and MOC, a new MOC code was developed and integrated into the SuperMC system, a super multi-function computational system for neutronics and radiation simulation. The numerical testing results demonstrated the feasibility and effectiveness of the new idea for geometry treatment in SuperMC. (authors)
Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation
NASA Astrophysics Data System (ADS)
Huang, Aiping; Tao, Linwei; Niu, Yilong
2018-04-01
In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual-amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining and selection combining. A novel adaptive power allocation algorithm (PAA) is also proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of our derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA obtains even better BER performance than the MIMO one, at the same time effectively reducing receiver complexity.
MC ray-tracing optimization of lobster-eye focusing devices with RESTRAX
NASA Astrophysics Data System (ADS)
Šaroun, Jan; Kulda, Jiří
2006-11-01
The enhanced functionalities of the latest version of the RESTRAX software, providing a high-speed Monte Carlo (MC) ray-tracing code to represent a virtual three-axis neutron spectrometer, include the representation of parabolic and elliptic guide profiles and facilities for the numerical optimization of parameter values characterizing the instrument components. As examples, we present simulations of a doubly focusing monochromator in combination with cold neutron guides and lobster-eye supermirror devices, concentrating a monochromatic beam onto small sample volumes. A Levenberg-Marquardt minimization algorithm is used to optimize simultaneously several parameters of the monochromator and lobster-eye guides. We compare the performance of optimized configurations in terms of monochromatic neutron flux and energy spread, and demonstrate the effect of lobster-eye optics on beam transformations in real and momentum subspaces.
Non-line-of-sight single-scatter propagation model for noncoplanar geometries.
Elshimy, Mohamed A; Hranilovic, Steve
2011-03-01
In this paper, a geometrical propagation model is developed that generalizes the classical single-scatter model under the assumption of first-order scattering and non-line-of-sight (NLOS) communication. The generalized model considers the case of a noncoplanar geometry, where it overcomes the restriction that the transmitter and the receiver cone axes lie in the same plane. To verify the model, a Monte Carlo (MC) radiative transfer model based on a photon transport algorithm is constructed. Numerical examples for a wavelength of 266 nm are illustrated, which corresponds to a solar-blind NLOS UV communication system. A comparison of the temporal responses of the generalized model and the MC simulation results shows close agreement. Path loss and delay spread are also shown for different pointing directions.
Stochastic Rotation Dynamics simulations of wetting multi-phase flows
NASA Astrophysics Data System (ADS)
Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin
2016-06-01
Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to e.g. immiscible two-phase flow with viscosity contrast we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact to the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.
Morikami, Kenji; Itezono, Yoshiko; Nishimoto, Masahiro; Ohta, Masateru
2014-01-01
Compounds with a medium-sized flexible ring often show atropisomerism, caused by the high energy barriers between long-lived conformers that can be isolated and often have different biological properties from each other. In this study, the frequency of the transition between the two stable conformers, aS and aR, of thienotriazolodiazepine compounds with flexible 7-membered rings was estimated computationally by Monte Carlo (MC) simulations and validated experimentally by NMR experiments. To estimate the energy barriers for transitions as precisely as possible, the potential energy (PE) surfaces used in the MC simulations were calculated by molecular orbital (MO) methods. To accomplish the MC simulations with the MO-based PE surfaces in a practical central processing unit (CPU) time, the MO-based PE of each conformer was pre-calculated and stored before the MC simulations, and then only referred to during the MC simulations. The activation energies for transitions calculated by the MC simulations agreed well with the experimental ΔG determined by the NMR experiments. The analysis of the transition trajectories of the MC simulations revealed that the transition occurred not only through the transition states, but also through many different transition paths. Our computational methods gave us quantitative estimates of the atropisomerism of the thienotriazolodiazepine compounds in a practical period of time, and the method could be applicable to other slow-dynamics phenomena that cannot be investigated by other atomistic simulations.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. A variance reduction technique (VRT) was implemented in the MC calculation to speed up this process. This work focused on the optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by global electron cut-off energies of 1, 2 and 5 MeV, using 20 × 10^7 particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle-history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7. In this study, with a 5 MeV electron cut-off and 10 × 10^7 particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce the MC electron beam calculation duration while preserving its accuracy.
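The decision rule at the heart of range rejection is compact enough to sketch (hypothetical numbers; the condensed-history transport, restricted stopping powers and regional cut-offs of EGSnrc are all omitted): an electron below the global cut-off whose residual range cannot carry it across the current region boundary is absorbed on the spot, saving the many small steps full transport would spend on it.

```python
def csda_range_cm(energy_MeV):
    # crude water-like rule of thumb for the residual electron range;
    # a real code interpolates tabulated stopping-power data instead
    return 0.5 * energy_MeV

def transport_decision(energy_MeV, dist_to_boundary_cm, e_cut_global_MeV=5.0):
    """Range rejection: terminate histories that cannot leave the region."""
    if (energy_MeV < e_cut_global_MeV
            and csda_range_cm(energy_MeV) < dist_to_boundary_cm):
        return "deposit energy locally"   # history terminated early
    return "continue full transport"

for e in (0.5, 2.0, 8.0):
    print(e, "MeV ->", transport_decision(e, dist_to_boundary_cm=2.0))
```

Raising the global cut-off makes the rule fire more often, which is why the 5 MeV setting above gives the largest speed-up; the risk is discarding electrons whose bremsstrahlung photons would have escaped the region, hence the need to validate against the non-VRT model.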
Numerical Simulation of the Permeable Base Transistor.
1987-05-04
...significant clustering within the vicinity of the base region. Further, a cursory examination of the unscaled contours (figure 6) and the depletion...be published by the authors. 8. E.L. Chaffee, Theory of Thermionic Vacuum Tubes, McGraw-Hill (1933), Cf. figures 75 and 76. 9. Y. Avano, K. Tomizawa and
Impact of magnitude uncertainties on seismic catalogue properties
NASA Astrophysics Data System (ADS)
Leptokaropoulos, K. M.; Adamaki, A. K.; Roberts, R. G.; Gkarlaouni, C. G.; Paradisopoulou, P. M.
2018-05-01
Catalogue-based studies are of central importance in seismological research, to investigate the temporal, spatial and size distribution of earthquakes in specified study areas. Methods for estimating the fundamental catalogue parameters, like the Gutenberg-Richter (G-R) b-value and the completeness magnitude (Mc), are well established and routinely applied. However, the magnitudes reported in seismicity catalogues contain measurement uncertainties which may significantly distort the estimation of the derived parameters. In this study, we use numerical simulations of synthetic data sets to assess the reliability of different methods for determining the b-value and Mc, assuming the validity of the G-R law. After contaminating the synthetic catalogues with Gaussian noise (with selected standard deviations), the analysis is performed for numerous data sets of different sample size (N). The noise introduced to the data generally leads to a systematic overestimation of magnitudes close to and above Mc. This fact causes an increase of the average number of events above Mc, which in turn leads to an apparent decrease of the b-value. This may result in a significant overestimation of the seismicity rate even well above the actual completeness level. The b-value can in general be reliably estimated even for relatively small data sets (N < 1000) when only magnitudes higher than the actual completeness level are used. Nevertheless, a correction of the total number of events belonging to each magnitude class (i.e. 0.1 unit) should be considered, to deal with the magnitude uncertainty effect. Because magnitude uncertainties (here in the form of Gaussian noise) are inevitable in all instrumental catalogues, this finding is fundamental for seismicity rate and seismic hazard assessment analyses. Also important is that for some data analyses significant bias cannot necessarily be avoided by choosing a high Mc value for the analysis. In such cases, there may be a risk of severe miscalculation of the seismicity rate regardless of the selected magnitude threshold, unless possible bias is properly assessed.
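The central experiment is easy to reproduce in miniature. The sketch below (hypothetical b = 1 catalogue, complete down to M0 = 0) draws G-R magnitudes, contaminates them with Gaussian noise of increasing standard deviation, and applies the Aki maximum-likelihood estimator above a threshold Mc; the printed counts show the inflation of events above Mc described above, alongside the corresponding b estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
B_TRUE, M0, MC = 1.0, 0.0, 2.0
BETA = B_TRUE * np.log(10.0)    # G-R pdf above M0: BETA * exp(-BETA * (m - M0))

mags = M0 + rng.exponential(1.0 / BETA, 2_000_000)   # noise-free synthetic catalogue

def aki_b(magnitudes, mc):
    m = magnitudes[magnitudes >= mc]
    b = np.log10(np.e) / (m.mean() - mc)   # Aki (1965) maximum-likelihood estimate
    return m.size, b

for sigma in (0.0, 0.1, 0.3):
    noisy = mags + rng.normal(0.0, sigma, mags.size)  # magnitude uncertainty
    n, b = aki_b(noisy, MC)
    print(f"sigma={sigma}: N(>=Mc)={n}, b={b:.3f}")
```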
Interfacing MCNPX and McStas for simulation of neutron transport
NASA Astrophysics Data System (ADS)
Klinkby, Esben; Lauritzen, Bent; Nonbøl, Erik; Kjær Willendrup, Peter; Filges, Uwe; Wohlmuther, Michael; Gallmeier, Franz X.
2013-02-01
Simulations of the target-moderator-reflector systems at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX (Waters et al., 2007 [1]) or FLUKA (Battistoni et al., 2007; Ferrari et al., 2005 [2,3]), whereas simulations of neutron transport from the moderator and of the instrument response are performed by neutron ray-tracing codes such as McStas (Lefmann and Nielsen, 1999; Willendrup et al., 2004, 2011a,b [4-7]). The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations; for example, it does not allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve such shortcomings includes the introduction of McStas-inspired supermirrors in MCNPX. In the present paper, different approaches to interfacing MCNPX and McStas are presented and applied to a simple test case. The direct coupling between MCNPX and McStas allows for more accurate simulations of, e.g., complex moderator geometries, backgrounds, interference between beam-lines, as well as shielding requirements along the neutron guides.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popescu, I A; Lobo, J; Sawkey, D
2014-06-15
Purpose: To simulate and measure radiation backscattered into the monitor chamber of a TrueBeam linac; establish a rigorous framework for absolute dose calculations for TrueBeam Monte Carlo (MC) simulations through a novel approach, taking into account the backscattered radiation and the actual machine output during beam delivery; improve agreement between measured and simulated relative output factors. Methods: The ‘monitor backscatter factor’ is an essential ingredient of a well-established MC absolute dose formalism (the MC equivalent of the TG-51 protocol). This quantity was determined for the 6 MV, 6X FFF, and 10X FFF beams by two independent methods: (1) MC simulations in the monitor chamber of the TrueBeam linac; (2) linac-generated beam record data for target current, logged for each beam delivery. Upper head MC simulations used a freely available manufacturer-provided interface to a cloud-based platform, allowing use of the same head model as that used to generate the publicly-available TrueBeam phase spaces, without revealing the upper head design. The MC absolute dose formalism was expanded to allow direct use of target current data. Results: The relation between backscatter, number of electrons incident on the target for one monitor unit, and MC absolute dose was analyzed for open fields, as well as a jaw-tracking VMAT plan. The agreement between the two methods was better than 0.15%. It was demonstrated that the agreement between measured and simulated relative output factors improves across all field sizes when backscatter is taken into account. Conclusion: For the first time, simulated monitor chamber dose and measured target current for an actual TrueBeam linac were incorporated in the MC absolute dose formalism. In conjunction with the use of MC inputs generated from post-delivery trajectory-log files, the present method allows accurate MC dose calculations, without resorting to any of the simplifying assumptions previously made in the TrueBeam MC literature. This work has been partially funded by Varian Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Gao, M
Purpose: Monte Carlo simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing for the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2-100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
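A back-of-the-envelope sketch of the job-partitioning and cost arithmetic quoted above; only the $0.63/hour figure for 40 nodes comes from the abstract, and the helper function is hypothetical:

```python
# Hedged sketch of the cluster arithmetic quoted in the abstract: split a
# 10-million-event MC run into 500k-event jobs and project the hourly cost
# for a given node count. Only the $0.63/hour figure for 40 nodes is from
# the abstract; the helper itself is a hypothetical illustration.
def plan_cluster(total_events, events_per_job, nodes, hourly_cost_40_nodes=0.63):
    jobs = -(-total_events // events_per_job)        # ceiling division
    return {
        "jobs": jobs,
        "jobs_per_node": -(-jobs // nodes),
        "projected_hourly_cost": hourly_cost_40_nodes / 40 * nodes,
    }

print(plan_cluster(10_000_000, 500_000, nodes=40))    # the published setup
print(plan_cluster(10_000_000, 500_000, nodes=100))   # ~$1.58/h by linear scaling;
                                                      # the quoted $1.41 reflects spot pricing
```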
NASA Astrophysics Data System (ADS)
Schreiber, M. E.; Zwolinski, M. D.; Taglia, P. J.; Bahr, J. M.; Hickey, W. J.
2001-05-01
We are investigating the role of anaerobic processes that control field-scale BTEX loss using a variety of experimental and numerical techniques. Tracer tests, laboratory microcosms, and in situ microcosms (ISMs) were designed to examine BTEX biodegradation under intrinsic and enhanced anaerobic conditions in a BTEX plume at Fort McCoy, WI. In the tracer tests, addition of nitrate resulted in loss of toluene, ethylbenzene, and m, p-xylenes but not benzene. Laboratory microcosm and ISM experiments confirmed that nitrate addition is not likely to enhance benzene biodegradation at the site. Excess nitrate losses were observed in both field and laboratory experiments, indicating that reliance on theoretical stoichiometric equations to estimate contaminant mass losses should be re-evaluated. To examine changes in microbial community during biodegradation of BTEX under enhanced nitrate-reducing conditions, DNA was extracted from laboratory microcosm sediment, the 16S-rRNA gene was amplified using eubacterial primers, and products were separated by denaturing gradient gel electrophoresis. Banding patterns suggest that nitrate caused more of a community change than BTEX. These data suggest that nitrate plays an important role in microbial population selection. Numerical simulations were conducted to simulate the evolution of the BTEX plume and to quantify BTEX losses due to intrinsic and nitrate-enhanced biodegradation. Results suggest that the majority of intrinsic BTEX mass loss has occurred under aerobic and iron-reducing conditions. Due to depletion of solid-phase Fe(III) over time, however, future BTEX losses under iron-reducing conditions will decrease, and methanogenesis will play an increasingly important role in controlling biodegradation. The simulations also suggest that although nitrate addition will decrease TEX concentrations, source removal with intrinsic biodegradation is likely the most effective treatment method for the site.
Finite element simulations of the Portevin Le Chatelier effect in aluminium alloy
NASA Astrophysics Data System (ADS)
Hopperstad, O. S.; Børvik, T.; Berstad, T.; Benallal, A.
2006-08-01
Finite element simulations of the Portevin-Le Chatelier effect in aluminium alloy 5083-H116 are presented and evaluated against existing experimental results. The constitutive model of McCormick (1988) for materials exhibiting negative steady-state strain-rate sensitivity is incorporated into an elastic-viscoplastic model for large plastic deformations and implemented in LS-DYNA for use with the explicit or implicit solver. Axisymmetric tensile specimens loaded at different strain rates are studied numerically, and it is shown that the model predicts the experimental behaviour with reasonable accuracy, including serrated yielding and propagating bands of localized plastic deformation along the gauge length of the specimen at intermediate strain rates.
NASA Astrophysics Data System (ADS)
Tian, Liang; Wilkinson, Richard; Yang, Zhibing; Power, Henry; Fagerlund, Fritjof; Niemi, Auli
2017-08-01
We explore the use of Gaussian process emulators (GPEs) in the numerical simulation of CO2 injection into a deep heterogeneous aquifer. The model domain is a two-dimensional, log-normally distributed stochastic permeability field. We first estimate the cumulative distribution functions (CDFs) of the CO2 breakthrough time and the total CO2 mass using a computationally expensive Monte Carlo (MC) simulation. We then show that we can accurately reproduce these CDF estimates with a GPE, using only a small fraction of the computational cost required by traditional MC simulation. In order to build a GPE that can predict the simulator output from a permeability field consisting of thousands of values, we use a truncated Karhunen-Loève (K-L) expansion of the permeability field, which enables the application of the Bayesian functional regression approach. We perform a cross-validation exercise to give insight into the optimization of the experiment design for selected scenarios: we find that a training set of a few hundred runs is sufficient and that as few as 15 K-L components are adequate. Our work demonstrates that a GPE with truncated K-L expansion can be effectively applied to uncertainty analysis associated with the modelling of multiphase flow and transport processes in heterogeneous media.
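A minimal sketch of the emulation workflow, assuming an exponential covariance on a 1D grid and a toy stand-in for the flow simulator; the paper's 2D setup and kernel choices may differ:

```python
# Hedged sketch of the GPE workflow above: truncate a Karhunen-Loeve
# expansion of a log-normal permeability field and regress simulator
# output on the leading K-L coefficients with a Gaussian process.
# The covariance model, 15 modes, and the toy "simulator" are
# illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
n_cells, n_modes, n_train = 1000, 15, 200

# Exponential covariance on a 1D grid (stand-in for the 2D field).
x = np.linspace(0.0, 1.0, n_cells)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1][:n_modes]          # leading K-L modes
phi = eigvec[:, idx] * np.sqrt(eigval[idx])       # scaled eigenvectors

xi = rng.normal(size=(n_train, n_modes))          # K-L coefficients
log_perm = xi @ phi.T                             # truncated K-L fields

def toy_simulator(fields):
    """Cheap stand-in for the expensive flow simulator."""
    return np.exp(fields).mean(axis=1)            # e.g. an effective permeability

y = toy_simulator(log_perm)
gpe = GaussianProcessRegressor(normalize_y=True).fit(xi, y)

# Emulate many new realizations at negligible cost, then read off a CDF.
xi_new = rng.normal(size=(10_000, n_modes))
cdf_samples = np.sort(gpe.predict(xi_new))
print(cdf_samples[[999, 4999, 8999]])             # ~P10, P50, P90
```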
Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process
NASA Astrophysics Data System (ADS)
Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.
2017-05-01
Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10¹² G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines, one that makes the simulation of sophisticated geometries possible for the first time. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for the fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to more simple, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online.
spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains
NASA Astrophysics Data System (ADS)
Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo
2016-09-01
The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction located in a plain site of Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on the best-known prediction methods (such as indicator kriging and cokriging) are also implemented in the spMC package, and further, more advanced methods are available for simulations, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J; Micka, J; Culberson, W
Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString ¹⁰³Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of ¹⁰³Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in ¹⁰³Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. ¹⁰³Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Lulin; Fan, Jiwen; Lebo, Zachary J.
The squall line event on May 20, 2011, during the Midlatitude Continental Convective Clouds (MC3E) field campaign has been simulated by three bin (spectral) microphysics schemes coupled into the Weather Research and Forecasting (WRF) model. Semi-idealized three-dimensional simulations driven by temperature and moisture profiles acquired by a radiosonde released in the pre-convection environment at 1200 UTC in Morris, Oklahoma show that each scheme produced a squall line with features broadly consistent with the observed storm characteristics. However, substantial differences in the details of the simulated dynamic and thermodynamic structure are evident. These differences are attributed to different algorithms and numerical representations of microphysical processes, assumptions about the hydrometeor processes and properties, especially the ice particle mass, density, and terminal velocity relationships with size, and the resulting interactions between the microphysics, cold pool, and dynamics. This study shows that different bin microphysics schemes, designed to be conceptually more realistic and thus arguably more accurate than bulk microphysics schemes, still simulate a wide spread of microphysical, thermodynamic, and dynamic characteristics of a squall line, qualitatively similar to the spread of squall line characteristics using various bulk schemes. Future work may focus on improving the representation of ice particle properties in bin schemes to reduce this uncertainty, and on using similar assumptions for all schemes to isolate the impact of physics from numerics.
Numerical simulation studies for optical properties of biomaterials
NASA Astrophysics Data System (ADS)
Krasnikov, I.; Seteikin, A.
2016-11-01
Biophotonics involves understanding how light interacts with biological matter, from molecules and cells to tissues and even whole organisms. Light can be used to probe biomolecular events, such as gene expression and protein-protein interactions, with impressively high sensitivity and specificity. The spatial and temporal distribution of biochemical constituents can also be visualized with light and, thus, the corresponding physiological dynamics in living cells, tissues, and organisms in real time. Computer-based Monte Carlo (MC) models of light transport in turbid media take a different approach. In this paper, the optical and structural properties of biomaterials are discussed. We explain the numerical simulation method used for studying the optical properties of biomaterials. Applications of the Monte Carlo method in photodynamic therapy, skin tissue optics, and bioimaging are described.
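A minimal sketch of the MC photon-transport idea, assuming isotropic scattering and illustrative optical coefficients (tissue codes typically use the Henyey-Greenstein phase function and full 3D geometry):

```python
# Hedged sketch of MC light transport in a turbid slab: free paths are
# sampled from the Beer-Lambert law, absorption is handled by implicit
# weight capture. Isotropic scattering and the coefficients below are
# simplifying assumptions, not values from the paper.
import math
import random

mu_a, mu_s, thickness = 0.1, 10.0, 1.0       # absorption/scattering (1/cm), slab (cm)
mu_t = mu_a + mu_s
albedo = mu_s / mu_t

def run_photon(rng):
    """Return (escaped weight reflected, escaped weight transmitted)."""
    z, uz, w = 0.0, 1.0, 1.0                  # depth, direction cosine, weight
    while w > 1e-3:                           # crude termination (roulette would go here)
        z += uz * (-math.log(1.0 - rng.random()) / mu_t)   # sample free path
        if z < 0.0:
            return w, 0.0                     # escaped at the top surface
        if z > thickness:
            return 0.0, w                     # escaped at the bottom surface
        w *= albedo                           # implicit absorption
        uz = 2.0 * rng.random() - 1.0         # isotropic scattering
    return 0.0, 0.0                           # remaining weight absorbed

rng = random.Random(42)
n = 5_000
refl = trans = 0.0
for _ in range(n):
    r, t = run_photon(rng)
    refl, trans = refl + r, trans + t
print(f"R ~ {refl/n:.3f}, T ~ {trans/n:.3f}, A ~ {1 - (refl + trans)/n:.3f}")
```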
Interannual variation of mid-summer heavy rainfall in the eastern edge of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Jiang, Xingwen; Li, Yueqing; Yang, Song; Shu, Jianchuan; He, Guangbi
2015-12-01
Heavy rainfall (HR) often hits the eastern edge of the Tibetan Plateau (EETP) and causes severe floods and landslides in summer, especially in July. In this study, the authors investigate the interannual variation of July HR events and its possible causes. The maximum number of days with HR in July is located at the EETP in China. It is significantly and negatively correlated with the rainfall in southeastern China. More HR events are accompanied by an anomalous lower-tropospheric anticyclone over southeastern China, a westward movement of the western North Pacific subtropical high, and enhanced rainfall in the Maritime Continent (MC). The MC convection exerts a significant impact on the variation of HR events over the EETP. Results from analyses of observations and numerical simulations indicate that the convective heating over the MC induces an anomalous anticyclone over southeastern China, and that the Ekman pumping effect and the circulation-convection feedback play vital roles in this process. The high correlation between the HR events over the EETP and the equatorial central Pacific SST depends on the relationship between the MC convection and the equatorial central Pacific SST. This relationship is asymmetric: only a warm SST anomaly in the equatorial central Pacific is accompanied by fewer HR events over the EETP.
Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi
2016-11-08
We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.
2012-07-01
of the modelling and simulation community and provide it with implementation guidelines; and provide ... definition; relationship to standards; specification of the MC management procedure; specification of MC artifacts. Important considerations ... using the present guideline as a reference. • The VV&A (verification, validation and acceptance) of MCs must form an integral part of the
Binding, Thermodynamics, and Selectivity of a Non-peptide Antagonist to the Melanocortin-4 Receptor
Saleh, Noureldin; Kleinau, Gunnar; Heyder, Nicolas; Clark, Timothy; Hildebrand, Peter W.; Scheerer, Patrick
2018-01-01
The melanocortin-4 receptor (MC4R) is a potential drug target for treatment of obesity, anxiety, depression, and sexual dysfunction. Crystal structures for MC4R are not yet available, which has hindered successful structure-based drug design. Using microsecond-scale molecular-dynamics simulations, we have investigated selective binding of the non-peptide antagonist MCL0129 to a homology model of human MC4R (hMC4R). This approach revealed that, at the end of a multi-step binding process, MCL0129 spontaneously adopts a binding mode in which it blocks the agonistic binding site. This binding mode was confirmed in subsequent metadynamics simulations, which gave an affinity for hMC4R that matches the experimentally determined value. Extending our simulations of MCL0129 binding to hMC1R and hMC3R, we find that receptor subtype selectivity for hMC4R depends on a few amino acids located in various structural elements of the receptor. These insights may support rational drug design targeting the melanocortin systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholls, David P.
Over the past four years the Principal Investigator (PI) David Nicholls has worked on several projects in connection with award DE-SC0001549. Of the greatest import has been the continued supervision of five Ph.D. students (Robyn Canning, Travis McBride, Andrew Sward, Zheng Fang, and Venu Tammali). Canning and McBride defended their theses and graduated in May 2012, while Sward defended his thesis and graduated in May 2013. Both Fang and Tammali plan to defend their theses within the year and graduate in May 2015. Fang is now a very experienced graduate researcher with one paper accepted for publication and another in preparation. Tammali is nearly to the point of writing a paper and will work this summer as an intern at Argonne National Laboratory in the Mathematics and Computer Science Division under the supervision of Paul Fischer.
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, a high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Eddy Flow during Magma Emplacement: The Basemelt Sill, Antarctica
NASA Astrophysics Data System (ADS)
Petford, N.; Mirhadizadeh, S.
2014-12-01
The McMurdo Dry Valleys magmatic system, Antarctica, forms part of the Ferrar dolerite Large Igneous Province. Comprising a vertical stack of interconnected sills, the complex provides a world-class example of pervasive lateral magma flow on a continental scale. The lowermost intrusion (Basement Sill) offers detailed sections through the now frozen particle macrostructure of a congested magma slurry [1]. Image-based numerical modelling, where the intrusion geometry defines its own unique finite element mesh, allows simulations of the flow regime to be made that incorporate realistic magma particle sizes and flow geometries obtained directly from field measurements. One testable outcome relates to the origin of rhythmic layering, where analytical results imply the sheared suspension intersects the phase space for particle Reynolds and Peclet number flow characteristic of macroscopic structure formation [2]. Another relates to potentially novel crystal-liquid segregation due to the formation of eddies locally at undulating contacts at the floor and roof of the intrusion. The eddies are transient and mechanical in origin, unrelated to well-known fluid dynamical effects around obstacles where flow is turbulent. Numerical particle tracing reveals that these low Re number eddies can both trap (remove) and eject particles back into the magma at a later time according to their mass density. This trapping mechanism has the potential to develop local variations in structure (layering) and magma chemistry that may otherwise not occur where the contact between magma and country rock is linear. Simulations indicate that eddy formation is best developed where magma viscosity is in the range 1-10² Pa s. Higher viscosities (> 10³ Pa s) tend to dampen the effect, implying eddy development is most likely a transient feature. However, it is nice to think that something as simple as a bumpy contact could impart physical and, by implication, chemical diversity in igneous rocks. [1] Marsh, D.B. (2004), A magmatic mush column Rosetta stone: the McMurdo Dry Valleys of Antarctica. EOS, 85, 497-502. [2] Petford, N. (2009), Which Effective Viscosity? Mineralogical Magazine, 73, 167-191. Fig. 1. Numerical simulation showing the magma flow field and eddy formation where circulating magma is trapped; streamlines track particle orbits.
Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework
Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.
2015-01-01
Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765
Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling
NASA Astrophysics Data System (ADS)
Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.
2018-02-01
A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
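A hedged sketch of a coupled-level MLMC estimator for a toy dispersion SDE, integrated with the symplectic Euler method mentioned above; the model, parameters, and sample counts are illustrative assumptions:

```python
# Hedged MLMC sketch: a toy Ornstein-Uhlenbeck velocity model solved with
# symplectic Euler. Level l uses timestep h0/2**l; the telescoping sum
# combines coarse/fine differences driven by the same Brownian path.
import numpy as np

rng = np.random.default_rng(2)
T, h0, tau, sigma = 1.0, 0.1, 0.5, 1.0   # horizon, base step, timescale, noise

def final_position(h, dW):
    """Symplectic-Euler trajectory; dW holds the Brownian increments."""
    x, u = 0.0, 0.0
    for w in dW:
        u += (-u / tau) * h + sigma * w   # velocity update first,
        x += u * h                        # then position (symplectic Euler)
    return x

def level_estimator(level, n_samples):
    """Mean of P_l - P_{l-1} over coupled fine/coarse paths."""
    h_f = h0 / 2**level
    diffs = np.empty(n_samples)
    for i in range(n_samples):
        dW_f = rng.normal(0.0, np.sqrt(h_f), round(T / h_f))
        p_f = final_position(h_f, dW_f)
        if level == 0:
            diffs[i] = p_f
        else:  # coarse path re-uses the same Brownian increments, summed pairwise
            dW_c = dW_f.reshape(-1, 2).sum(axis=1)
            diffs[i] = p_f - final_position(2 * h_f, dW_c)
    return diffs.mean()

# Telescoping MLMC estimate of E[X(T)]; more samples on the cheap coarse levels.
samples = [4000, 1000, 250]
estimate = sum(level_estimator(l, n) for l, n in enumerate(samples))
print(f"MLMC estimate of E[X(T)]: {estimate:.4f}")
```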
Does phenomenological kinetics provide an adequate description of heterogeneous catalytic reactions?
Temel, Burcin; Meskine, Hakim; Reuter, Karsten; Scheffler, Matthias; Metiu, Horia
2007-05-28
Phenomenological kinetics (PK) is widely used in the study of the reaction rates in heterogeneous catalysis, and it is an important aid in reactor design. PK makes simplifying assumptions: It neglects the role of fluctuations, assumes that there is no correlation between the locations of the reactants on the surface, and considers the reacting mixture to be an ideal solution. In this article we test to what extent these assumptions damage the theory. In practice the PK rate equations are used by adjusting the rate constants to fit the results of the experiments. However, there are numerous examples where a mechanism fitted the data and was shown later to be erroneous or where two mutually exclusive mechanisms fitted well the same set of data. Because of this, we compare the PK equations to "computer experiments" that use kinetic Monte Carlo (kMC) simulations. Unlike in real experiments, in kMC the structure of the surface, the reaction mechanism, and the rate constants are known. Therefore, any discrepancy between PK and kMC must be attributed to an intrinsic failure of PK. We find that the results obtained by solving the PK equations and those obtained from kMC, while using the same rate constants and the same reactions, do not agree. Moreover, when we vary the rate constants in the PK model to fit the turnover frequencies produced by kMC, we find that the fit is not adequate and that the rate constants that give the best fit are very different from the rate constants used in kMC. The discrepancy between PK and kMC for the model of CO oxidation used here is surprising since the kMC model contains no lateral interactions that would make the coverage of the reactants spatially inhomogeneous. Nevertheless, such inhomogeneities are created by the interplay between the rate of adsorption, of desorption, and of vacancy creation by the chemical reactions.
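A minimal "computer experiment" in this spirit: a stochastic (Gillespie-type) simulation of CO oxidation kinetics whose turnover frequency could be compared against mean-field PK equations. Rate constants are arbitrary, and the random-mixing propensities stand in for a true lattice kMC with neighbour tracking:

```python
# Hedged sketch of a stochastic CO-oxidation "computer experiment" on N
# surface sites. Rate constants are arbitrary illustrative values; a real
# lattice kMC (as in the paper) would track actual neighbour pairs rather
# than the random-mixing propensities used here.
import math
import random

random.seed(3)
N = 400                                            # adsorption sites
k_co, k_o2, k_des, k_rxn = 1.0, 1.0, 0.2, 5.0      # rate constants (assumed)
co, o = 0, 0                                       # adsorbate counts
t, t_end, co2 = 0.0, 20.0, 0

while t < t_end:
    empty = N - co - o
    a = [k_co * empty,                             # CO(g) + * -> CO*
         k_o2 * empty * max(empty - 1, 0) / N,     # O2(g) + 2* -> 2 O*
         k_des * co,                               # CO* -> CO(g) + *
         k_rxn * co * o / N]                       # CO* + O* -> CO2 + 2*
    a_tot = sum(a)
    if a_tot == 0.0:
        break                                      # absorbing (poisoned) state
    t += -math.log(1.0 - random.random()) / a_tot  # exponential waiting time
    r, pick = random.random() * a_tot, 0
    while pick < 3 and r >= a[pick]:               # select the fired event
        r -= a[pick]
        pick += 1
    if pick == 0:
        co += 1
    elif pick == 1:
        o += 2
    elif pick == 2:
        co -= 1
    else:
        co, o, co2 = co - 1, o - 1, co2 + 1

print(f"coverages: CO {co/N:.2f}, O {o/N:.2f}; TOF ~ {co2/(N*t_end):.3f} /site/time")
```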
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
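A minimal sketch of the smoothing idea: replace the discontinuous indicator in F(s) = E[1{Q ≤ s}] with a sigmoid of width delta, which restores variance decay across levels; the paper's a posteriori calibration of delta is not reproduced here:

```python
# Hedged sketch: a smooth surrogate for the indicator function used in
# CDF estimation. The sigmoid form and fixed delta are assumptions; the
# paper calibrates its smoothing function a posteriori.
import numpy as np

def indicator(q, s):
    return (q <= s).astype(float)

def smoothed_indicator(q, s, delta=0.05):
    """Smooth surrogate for 1{q <= s}; tends to the indicator as delta -> 0."""
    return 1.0 / (1.0 + np.exp((q - s) / delta))

# Toy check: both estimators target the same CDF value F(s).
rng = np.random.default_rng(4)
q = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # stand-in QoI samples
s = 1.0
print(indicator(q, s).mean(), smoothed_indicator(q, s).mean())
```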
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
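A hedged sketch of the neMD/MC accept/reject step, using the Metropolis test on the nonequilibrium switching work (valid for a time-symmetric protocol); the Gaussian work values are synthetic stand-ins:

```python
# Hedged sketch of the neMD/MC acceptance step: a candidate produced by a
# nonequilibrium MD switch requiring work W is accepted with Metropolis
# probability min(1, exp(-beta*W)), which preserves the target distribution
# for a time-symmetric protocol. Work values below are synthetic stand-ins
# for real switching trajectories.
import math
import random

random.seed(5)
beta = 1.0 / 0.593    # 1/(kB*T) in mol/kcal at ~298 K (assumed units)

def nemd_mc_accept(work):
    """Metropolis test on the nonequilibrium switching work."""
    return random.random() < min(1.0, math.exp(-beta * work))

# A gentler (e.g. linear) work protocol dissipates less, so acceptance is
# higher; emulate two protocols by their mean dissipated work:
for label, mean_w in [("low-dissipation protocol", 0.5), ("abrupt protocol", 3.0)]:
    n_acc = sum(nemd_mc_accept(random.gauss(mean_w, 1.0)) for _ in range(10_000))
    print(f"{label}: acceptance ~ {n_acc / 10_000:.2f}")
```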
Integration of OpenMC methods into MAMMOTH and Serpent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie; DeHart, Mark; Tumulak, Aaron
OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakabe, D; Ohno, T; Araki, F
Purpose: The purpose of this study was to evaluate the combined organ dose of digital subtraction angiography (DSA) and computed tomography (CT) using a Monte Carlo (MC) simulation of abdominal interventions. Methods: The organ doses for DSA and CT were obtained with MC simulation and actual measurements using fluorescent-glass dosimeters at 7 abdominal portions in an Alderson-Rando phantom. DSA was performed from three directions: posterior-anterior (PA), right anterior oblique (RAO), and left anterior oblique (LAO). The organ dose from the MC simulation was compared with actual radiation dose measurements. Calculations for the MC simulation were carried out with the GMctdospp (IMPS, Germany) software based on the EGSnrc MC code. Finally, the combined organ dose for DSA and CT was calculated from the MC simulation using the X-ray conditions of a patient with a diagnosis of hepatocellular carcinoma. Results: For DSA from the PA direction, the organ doses for the actual measurements and MC simulation were 2.2 and 2.4 mGy/100 mAs at the liver, respectively, and 3.0 and 3.1 mGy/100 mAs at the spinal cord, while for CT, the organ doses were 15.2 and 15.1 mGy/100 mAs at the liver, and 14.6 and 13.5 mGy/100 mAs at the spinal cord. The maximum difference in organ dose between the actual measurements and the MC simulation was 11.0% for the spleen at PA, 8.2% for the spinal cord at RAO, and 6.1% for the left kidney at LAO with DSA, and 9.3% for the stomach with CT. The combined organ dose (4 DSAs and 6 CT scans) with the use of actual patient conditions was found to be 197.4 mGy for the liver and 205.1 mGy for the spinal cord. Conclusion: Our method makes it possible to accurately assess the organ dose to patients in abdominal interventions with combined DSA and CT.
NASA Astrophysics Data System (ADS)
Zhou, Abel; White, Graeme L.; Davidson, Rob
2018-02-01
Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance and validation can be simulated through use of Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids from the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system are in strong agreement with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating the designs of grids.
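For context, the reported quantities are related by standard grid figures of merit; a small sketch with illustrative numbers (the formulas follow the usual IEC-style definitions, not results from this paper):

```python
# Hedged sketch relating the quantities reported above. Given the incident
# scatter-to-primary ratio SPR and grid transmissions Tp (primary) and Ts
# (scatter), the total transmission and two standard figures of merit
# follow directly; the numbers below are illustrative assumptions only.
def grid_metrics(tp, ts, spr):
    tt = (tp + spr * ts) / (1.0 + spr)        # total transmission
    return {
        "Tt": tt,
        "CIF": tp / tt,                       # contrast improvement factor
        "Bucky": 1.0 / tt,                    # exposure (Bucky) factor
        "SPR_out": spr * ts / tp,             # SPR behind the grid
    }

print(grid_metrics(tp=0.70, ts=0.15, spr=4.0))
```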
Seismic performance of geosynthetic-soil retaining wall structures
NASA Astrophysics Data System (ADS)
Zarnani, Saman
Vertical inclusions of expanded polystyrene (EPS) placed behind rigid retaining walls were investigated as geofoam seismic buffers to reduce earthquake-induced loads. A numerical model was developed using the program FLAC and validated against 1-g shaking table test results of EPS geofoam seismic buffer models. Two constitutive models for the component materials were examined: elastic-perfectly plastic with a Mohr-Coulomb (M-C) failure criterion, and a non-linear hysteresis damping model with the equivalent linear method (ELM) approach. The M-C model was judged to be sufficiently accurate for practical purposes. The mechanical property of interest for attenuating dynamic loads using a seismic buffer was the buffer stiffness, defined as K = E/t (E = buffer elastic modulus, t = buffer thickness). For the range of parameters investigated in this study, K ≤ 50 MN/m³ was observed to be the practical range for the optimal design of these systems. Parametric numerical analyses were performed to generate design charts that can be used for the preliminary design of these systems. A new high-capacity shaking table facility was constructed at RMC that can be used to study the seismic performance of earth structures. Reduced-scale models of geosynthetic reinforced soil (GRS) walls were built on this shaking table and then subjected to simulated earthquake loading conditions. In some shaking table tests, the combined use of EPS geofoam and horizontal geosynthetic reinforcement layers was investigated. Numerical models were developed using the program FLAC together with the ELM and M-C constitutive models. Physical and numerical results were compared against values predicted using analysis methods found in the journal literature and in current North American design guidelines. The comparison shows that current Mononobe-Okabe (M-O) based analysis methods could not consistently predict the measured reinforcement connection load distributions satisfactorily at all elevations under both static and dynamic loading conditions. The results from the GRS model wall tests with combined EPS geofoam and geosynthetic reinforcement layers show that the inclusion of an EPS geofoam layer behind the GRS wall face can reduce the earth loads acting on the wall facing to values well below those recorded for conventional GRS wall model configurations.
Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein
2016-01-01
Aim: The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC) data derived for an 18 MV Varian 2100C linear accelerator. Background: High energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against the contaminant neutrons is recommended by the new IAEA and NCRP protocols. Materials and methods: The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and MC-based data were also derived. Two bunkers, one designed from the protocols and one from the MC data, were compared and discussed. Results: For the door, the MC simulation and the Wu–McGinley analytical method gave similar thicknesses for both BPE and lead. For the primary and secondary barriers, the MC simulation gave 440.11 mm for the ordinary concrete, with a total concrete thickness of 1709 mm required; calculating the same parameters with the recommended analytical methods gave a required thickness of 1762 mm, using the 445 mm TVL recommended for concrete. Additionally, for the secondary barrier a thickness of 752.05 mm was obtained. Conclusion: Our results showed that MC simulation and the protocol recommendations are in good agreement for the radiation contamination dose calculation. The differences between the analytical and MC simulation methods revealed that applying only one method for bunker design may lead to underestimation or overestimation in dose and shielding calculations. PMID:26900357
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency of SPECT imaging simulations.
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
MODFLOW-NWT – Robust handling of dry cells using a Newton Formulation of MODFLOW-2005
Hunt, Randal J.; Feinstein, Daniel T.
2012-01-01
The first versions of the widely used groundwater flow model MODFLOW (McDonald and Harbaugh 1988) had a sure but inflexible way of handling unconfined finite-difference aquifer cells where the water table dropped below the bottom of the cell—these "dry cells" were turned inactive for the remainder of the simulation. Problems with this formulation were easily seen, including the potential for inadvertent loss of simulated recharge in the model (Doherty 2001; Painter et al. 2008) and rippling of dry cells through the solution that unacceptably changed the groundwater flow system (Juckem et al. 2006). Moreover, solving problems of the natural world often required the ability to reactivate dry cells when the water table rose above the cell bottom. This seemingly simple requirement led to a two-decade effort to add such flexibility while avoiding numerical instability.
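The Newton-formulation remedy is to make the wet/dry transition smooth rather than discrete, so conductance remains a differentiable function of head and cells rewet automatically. A minimal sketch of that idea, assuming a generic smooth-max taper rather than MODFLOW-NWT's actual quadratic smoother:

```python
import numpy as np

def smoothed_sat_fraction(head, bot, top, eps=1e-2):
    """Fractional saturated thickness with a smooth, differentiable taper
    near dryness instead of the hard active/inactive switch of early
    MODFLOW: cells never go fully dry, so Newton iterations keep a usable
    derivative and rewetting needs no special logic. The smooth-max form
    and eps are illustrative, not MODFLOW-NWT's actual smoother."""
    b = (head - np.asarray(bot)) / (np.asarray(top) - np.asarray(bot))
    b_smooth = 0.5 * (b + np.sqrt(b * b + eps))   # smooth max(b, 0)
    return np.minimum(b_smooth, 1.0)              # capped when confined
```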
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moskvin, V; Pirlepesov, F; Tsiamas, P
Purpose: This study provides an overview of the design and commissioning of the Monte Carlo (MC) model of the spot-scanning proton therapy nozzle and its implementation for patient plan simulation. Methods: The Hitachi PROBEAT V scanning nozzle was simulated based on vendor specifications using the TOPAS extension of the Geant4 code. FLUKA MC simulation was also utilized to provide supporting data for the main simulation. Validation of the MC model was performed using vendor-provided data and measurements collected during acceptance/commissioning of the proton therapy machine. Actual patient plans using CT-based treatment geometry were simulated and compared to the dose distributions produced by the treatment planning system (Varian Eclipse 13.6) and to patient quality assurance measurements. In-house MATLAB scripts are used for converting DICOM data into TOPAS input files. Results: Comparison analysis of integrated depth doses (IDDs), therapeutic ranges (R90), and spot shapes/sizes at different distances from the isocenter indicates good agreement between MC and measurements. R90 agreement is within 0.15 mm across all energy tunes. IDD and spot shape/size differences are within the statistical error of the simulation (less than 1.5%). The MC-simulated data, validated with physical measurements, were used for the commissioning of the treatment planning system. Patient geometry simulations were conducted based on the Eclipse-produced DICOM plans. Conclusion: The treatment nozzle and standard option beam model were implemented in the TOPAS framework to simulate a highly conformal discrete spot-scanning proton beam system.
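As an illustration of one of the validation metrics above, a small sketch that extracts R90 from a sampled integrated depth-dose curve (it assumes the IDD actually falls below 90% of its peak on the distal side):

```python
import numpy as np

def r90(depth, idd):
    """Therapeutic range R90: the depth on the distal falloff where the
    integrated depth dose drops to 90% of its maximum, found by linear
    interpolation between the bracketing samples."""
    idd = np.asarray(idd, dtype=float)
    i_max = int(np.argmax(idd))
    target = 0.9 * idd[i_max]
    distal = idd[i_max:]
    j = int(np.argmax(distal < target))   # first distal sample below 90%
    d0, d1 = depth[i_max + j - 1], depth[i_max + j]
    v0, v1 = distal[j - 1], distal[j]
    return d0 + (target - v0) * (d1 - d0) / (v1 - v0)
```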
Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko
2012-01-01
This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as a function of relative humidity and temperature, whereas diffusion of liquid water was empirically obtained from experiment by adopting Fick's law. Percolation was estimated by following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with a calculation time step of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only a top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulation of composting.
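The transport terms of such a model can be sketched as one explicit update of a layered moisture profile. All coefficients and the wetness threshold below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def step_moisture(mc, dz, dt, d_liq=1e-8, k_perc=2e-9, evap_top=1e-7):
    """One explicit step of a 1-D vertical moisture balance in the spirit
    of the model above: Fickian liquid diffusion, Darcy-like downward
    percolation where a layer is wet enough, and external evaporation from
    the top layer only. mc is the moisture-content profile (top first)."""
    mc = np.asarray(mc, dtype=float).copy()
    # Fickian diffusion (explicit second difference, zero-flux boundaries)
    lap = np.zeros_like(mc)
    lap[1:-1] = (mc[2:] - 2 * mc[1:-1] + mc[:-2]) / dz**2
    lap[0] = (mc[1] - mc[0]) / dz**2
    lap[-1] = (mc[-2] - mc[-1]) / dz**2
    mc += dt * d_liq * lap
    # Percolation: shift water to the layer below where mc exceeds a hold
    q = np.where(mc[:-1] > 0.2, k_perc, 0.0) * dt / dz
    mc[:-1] -= q
    mc[1:] += q
    # Evaporation leaves the column through the top layer
    mc[0] = max(mc[0] - evap_top * dt / dz, 0.0)
    return mc
```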
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
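The scaling step described above amounts to multiplying the zero-absorption curve by a time-dependent weighted Beer-Lambert factor. A sketch under assumed units (mu_a in 1/mm, t in ps, and v the speed of light in tissue in mm/ps for n of roughly 1.4):

```python
import numpy as np

def scale_reflectance(t, r0, mu_a, time_frac, v=0.214):
    """Scale a zero-absorption time-resolved reflectance curve r0(t) by the
    weighted Beer-Lambert factor described above: layer i attenuates by
    exp(-mu_a[i] * f[i] * v * t), where f[i] is the fraction of the average
    classical photon path spent in layer i. Units are assumptions."""
    mu_eff = np.dot(np.asarray(mu_a), np.asarray(time_frac))
    return np.asarray(r0) * np.exp(-mu_eff * v * np.asarray(t))
```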
Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.
Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A
2011-01-01
Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
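One convergence criterion of the kind the paper combines can be sketched as follows; run_model, the batch size, and the 95% relative-half-width tolerance are placeholders, and the actual method joins several such criteria:

```python
import numpy as np

def run_until_converged(run_model, tol=0.02, batch=50, max_runs=10000,
                        rng=None):
    """Add batches of MC runs until the 95% confidence half-width of the
    output mean, relative to the mean, falls below tol. Assumes
    run_model(rng) returns one scalar model output per call."""
    rng = rng or np.random.default_rng()
    samples = []
    while len(samples) < max_runs:
        samples.extend(run_model(rng) for _ in range(batch))
        x = np.asarray(samples)
        half_width = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
        if abs(x.mean()) > 0 and half_width / abs(x.mean()) < tol:
            break
    return np.asarray(samples)
```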
Chetty, Indrin J; Curran, Bruce; Cygler, Joanna E; DeMarco, John J; Ezzell, Gary; Faddegon, Bruce A; Kawrakow, Iwan; Keall, Paul J; Liu, Helen; Ma, C M Charlie; Rogers, D W O; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V
2007-12-01
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
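Of the MC-specific components listed above, the statistical uncertainty of the computed dose is the easiest to make concrete. A minimal batch-method estimator (a sketch only; the report also covers history-by-history estimators and variance reduction techniques):

```python
import numpy as np

def batch_uncertainty(dose_batches):
    """Batch estimate of the statistical uncertainty of an MC dose grid:
    split the histories into N independent batches, then the standard
    error of the mean per voxel is std(batch doses) / sqrt(N).
    dose_batches has shape (n_batches, nx, ny, nz) or similar."""
    d = np.asarray(dose_batches, dtype=float)
    return d.std(axis=0, ddof=1) / np.sqrt(d.shape[0])
```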
McStas 1.1: a tool for building neutron Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.
2000-03-01
McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.
ERIC Educational Resources Information Center
Amador-Ruiz, Santiago; Gutierrez, David; Martínez-Vizcaíno, Vicente; Gulías-González, Roberto; Pardo-Guijarro, María J.; Sánchez-López, Mairena
2018-01-01
Background: Motor competence (MC) affects numerous aspects of children's daily life. The aims of this study were to: evaluate MC, provide population-based percentile values for MC; and determine the prevalence of developmental coordination disorder (DCD) in Spanish schoolchildren. Methods: This cross-sectional study included 1562 children aged 4…
Blind ICA detection based on second-order cone programming for MC-CDMA systems
NASA Astrophysics Data System (ADS)
Jen, Chih-Wei; Jou, Shyh-Jye
2014-12-01
The multicarrier code division multiple access (MC-CDMA) technique has received considerable interest for its potential application to future wireless communication systems due to its high data rate. A common problem regarding the blind multiuser detectors used in MC-CDMA systems is that they are extremely sensitive to the complex channel environment. Besides, the perturbation of colored noise may negatively affect the performance of the system. In this paper, a new coherent detection method will be proposed, which utilizes the modified fast independent component analysis (FastICA) algorithm, based on approximate negentropy maximization that is subject to the second-order cone programming (SOCP) constraint. The aim of the proposed coherent detection is to provide robustness against small-to-medium channel estimation mismatch (CEM) that may arise from channel frequency response estimation error in the MC-CDMA system, which is modulated by downlink binary phase-shift keying (BPSK) under colored noise. Noncoherent demodulation schemes are preferable to coherent demodulation schemes, as the latter are difficult to implement over time-varying fading channels. Differential phase-shift keying (DPSK) is therefore the natural choice for an alternative modulation scheme. Furthermore, the new blind differential SOCP-based ICA (SOCP-ICA) detection without channel estimation and compensation will be proposed to combat Doppler spread caused by time-varying fading channels in the DPSK-modulated MC-CDMA system under colored noise. In this paper, numerical simulations are used to illustrate the robustness of the proposed blind coherent SOCP-ICA detector against small-to-medium CEM and to emphasize the advantage of the blind differential SOCP-ICA detector in overcoming Doppler spread.
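For reference, the unconstrained core of such a detector is the FastICA fixed-point update for approximate negentropy maximization; the paper's contribution is to constrain this update with an SOCP feasible set, which is omitted in this sketch:

```python
import numpy as np

def fastica_step(w, z):
    """One fixed-point FastICA iteration on whitened data z (dims x
    samples) with the tanh contrast: w_new = E{z g(w'z)} - E{g'(w'z)} w,
    followed by renormalization. The SOCP constraint of the paper's
    detector is not included here."""
    y = w @ z
    g = np.tanh(y)
    g_prime = 1.0 - g ** 2
    w_new = (z * g).mean(axis=1) - g_prime.mean() * w
    return w_new / np.linalg.norm(w_new)
```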
NASA Technical Reports Server (NTRS)
Khambatta, Cyrus F.
2007-01-01
A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from McTMA system data files, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.
A virtual source model for Monte Carlo simulation of helical tomotherapy.
Yuan, Jiankui; Rong, Yi; Chen, Quan
2015-01-08
The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by combining the percentage leaf open time (LOT) extracted from the sinogram files with the leaf filter, jaw penumbra, and leaf latency. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media.
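The fluence-map construction described above can be sketched as follows for a single projection; the latency value and the Gaussian penumbra width are illustrative stand-ins for the commissioned leaf filter and jaw penumbra models:

```python
import numpy as np

def leaf_fluence(lot, latency=0.015, penumbra_sigma=1.2):
    """Build a relative fluence profile across the leaf bank from
    fractional leaf open times (LOT, one value per leaf): subtract a
    leaf-latency correction, then blur with a Gaussian kernel standing in
    for the jaw/leaf penumbra. Parameter values are illustrative."""
    eff = np.clip(np.asarray(lot, dtype=float) - latency, 0.0, None)
    x = np.arange(-4, 5, dtype=float)
    kernel = np.exp(-0.5 * (x / penumbra_sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(eff, kernel, mode="same")
```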
Idealized numerical modeling of polar mesocyclones dynamics diagnosed by energy budget
NASA Astrophysics Data System (ADS)
Sergeev, Dennis; Stepanenko, Victor
2014-05-01
Polar mesocyclones (MC) refer to a wide class of mesoscale vortices occurring poleward of the main polar front [1]. Their subtype - the polar low - is commonly known for its intensity, which can result in windstorm damage to infrastructure in high latitudes. The sparsity of observational data and the small size of polar MCs are major limitations for a clear understanding and numerical prediction of the evolution of these objects. The origin of polar MCs is still a matter of uncertainty, though recent numerical investigations have exposed a strong dependence of polar mesocyclone development upon the magnitude of baroclinicity and upon the water vapor concentration in the atmosphere. However, most of the previous studies focused on individual polar lows (so-called case studies), with too many factors affecting them simultaneously and none of them being dominant in polar MC generation. This study focuses on the early stages of polar MC development within idealized numerical experiments with a mesoscale atmospheric model, where it is possible to look deeper into each single physical process. Our aim is to explain the role of such mechanisms as baroclinic instability or diabatic heating by comparing their contribution to the structure and dynamics of the vortex. The baroclinic instability, as reported by many researchers [2], can be a crucial factor in an MC's life cycle, especially in polar regions. Besides the baroclinic instability, several diabatic processes can contribute to the energy generation that fuels a polar mesocyclone. One of the key energy sources in polar regions is surface heat fluxes. The other is the moisture content of the atmosphere, which can affect the development of the disturbance by altering the latent heat release. To evaluate the relative importance of the diabatic and baroclinic energy sources for the development of the polar mesocyclone we apply energy diagnostics. In other words, we examine the rate of change of the kinetic energy (which can be interpreted as the growth rate of the vortex) and energy conversion in the diagnostic equations for kinetic and available potential energy (APE). The energy budget equations are implemented in two forms. The first approach follows the scheme developed by Lorenz (1955) in which KE and APE are broken into a mean component and an eddy component forming a well-known energy cycle. The second method is based on the energy equations that are strictly derived from the governing equations of the numerical mesoscale model used. The latter approach, hence, takes into account all the approximations and numerical features used in the model. Some conclusions based on the comparison of the described methods are presented in the study. A series of high-resolution experiments is carried out using the three-dimensional non-hydrostatic limited-area sigma-coordinate numerical model ReMeDy (Research Mesoscale Dynamics), being developed at Lomonosov Moscow State University [3]. An idealized basic state is used for all simulations. It is composed of a zonally oriented baroclinic zone over a sea surface partly covered with ice. To realize a baroclinic channel environment, zero-gradient boundary conditions are imposed at the meridional lateral boundaries, while the zonal boundary conditions are periodic. The initialization of the mesocyclone is achieved by creating a small axisymmetric vortex in the center of the model domain.
The baroclinicity and stratification of the basic state, as well as the surface parameters, are varied in the typically observed range. References 1. Heinemann G, Øyvind S. 2013. Workshop On Polar Lows. Bull. Amer. Meteor. Soc. 94: ES123-ES126. 2. Yanase W, Niino H. 2006. Dependence of Polar Low Development on Baroclinicity and Physical Processes: An Idealized High-Resolution Experiment, J. Atmos. Sci. 64: 3044-3067. 3. Chechin DG et al. 2013. Idealized dry quasi 2-D mesoscale simulations of cold-air outbreaks over the marginal sea ice zone with fine and coarse resolution. J. Geophys. Res. 118: 8787-8813.
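As a minimal illustration of the Lorenz-type decomposition used in the first budget method above, the eddy kinetic energy reservoir can be computed as follows (a sketch; the full cycle also requires the mean KE, the mean and eddy APE, and the conversion terms between them):

```python
import numpy as np

def eddy_kinetic_energy(u, v, zonal_axis=-1):
    """Eddy kinetic energy per unit mass, EKE = ((u')^2 + (v')^2) / 2,
    with primes denoting deviations from the zonal mean - the eddy KE
    reservoir of the Lorenz (1955) energy cycle."""
    u_p = u - u.mean(axis=zonal_axis, keepdims=True)
    v_p = v - v.mean(axis=zonal_axis, keepdims=True)
    return 0.5 * (u_p ** 2 + v_p ** 2)
```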
FF12MC: A revised AMBER forcefield and new protein simulation protocol
2016-01-01
ABSTRACT Specialized to simulate proteins in molecular dynamics (MD) simulations with explicit solvation, FF12MC is a combination of a new protein simulation protocol employing uniformly reduced atomic masses by tenfold and a revised AMBER forcefield FF99 with (i) shortened C—H bonds, (ii) removal of torsions involving a nonperipheral sp3 atom, and (iii) reduced 1–4 interaction scaling factors of torsions ϕ and ψ. This article reports that in multiple, distinct, independent, unrestricted, unbiased, isobaric–isothermal, and classical MD simulations FF12MC can (i) simulate the experimentally observed flipping between left‐ and right‐handed configurations for C14–C38 of BPTI in solution, (ii) autonomously fold chignolin, CLN025, and Trp‐cage with folding times that agree with the experimental values, (iii) simulate subsequent unfolding and refolding of these miniproteins, and (iv) achieve a robust Z score of 1.33 for refining protein models TMR01, TMR04, and TMR07. By comparison, the latest general‐purpose AMBER forcefield FF14SB locks the C14–C38 bond to the right‐handed configuration in solution under the same protein simulation conditions. Statistical survival analysis shows that FF12MC folds chignolin and CLN025 in isobaric–isothermal MD simulations 2–4 times faster than FF14SB under the same protein simulation conditions. These results suggest that FF12MC may be used for protein simulations to study kinetics and thermodynamics of miniprotein folding as well as protein structure and dynamics. Proteins 2016; 84:1490–1516. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27348292
Shocks in oscillated granular layers
NASA Astrophysics Data System (ADS)
Bougie, J.; Moon, Sung Joon; Swift, J. B.; Swinney, Harry L.
2001-11-01
We study shock formation in vertically oscillated granular layers, where shock waves form with each collision between the layer and the bottom plate of the container. We use both three-dimensional numerical solutions of continuum equations developed by Jenkins and Richman (J.T. Jenkins and M.W. Richman, Arch. Rat. Mech. Anal. 87, 355 (1985)) for smooth and nearly elastic hard spheres, and previously validated molecular dynamics (MD) simulations (C. Bizon, M.D. Shattuck, J.B. Swift, W.D. McCormick, and H.L. Swinney, Phys. Rev. Lett. 80, 57 (1998)). Both methods capture the shock formation, and the two methods agree quantitatively for small dissipation. We also investigate the effect of inelasticity on shock formation, and use both smooth and rough hard-sphere MD simulations to investigate the effect of friction in this system.
Zhao, Chao; Li, Dawei; Feng, Chuanping; Zhang, Zhenya; Sugiura, Norio; Yang, Yingnan
2015-01-01
A series of advanced WO3-based photocatalysts including CuO/WO3, Pd/WO3, and Pt/WO3 were synthesized for the photocatalytic removal of microcystin-LR (MC-LR) under simulated solar light. In the present study, Pt/WO3 exhibited the best performance for the photocatalytic degradation of MC-LR. The MC-LR degradation can be described by pseudo-first-order kinetic model. Chloride ion (Cl−) with proper concentration could enhance the MC-LR degradation. The presence of metal cations (Cu2+ and Fe3+) improved the photocatalytic degradation of MC-LR. This study suggests that Pt/WO3 photocatalytic oxidation under solar light is a promising option for the purification of water containing MC-LR. PMID:25884038
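Since the degradation follows a pseudo-first-order kinetic model, the rate constant can be recovered from measured concentrations with a one-line least-squares fit; a sketch with assumed units:

```python
import numpy as np

def pseudo_first_order_k(t, c):
    """Fit the pseudo-first-order rate constant k from ln(C0/C) = k*t by
    least squares through the origin. t in minutes and c (MC-LR
    concentration) in any consistent unit are assumptions."""
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(c, dtype=float)[0] / np.asarray(c, dtype=float))
    return float(t @ y / (t @ t))
```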
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirano, Teruyuki; Winn, Joshua N.; Albrecht, Simon
We present an improved formula for the anomalous radial velocity of the star during planetary transits due to the Rossiter-McLaughlin (RM) effect. The improvement comes from a more realistic description of the stellar absorption line profiles, taking into account stellar rotation, macroturbulence, thermal broadening, pressure broadening, and instrumental broadening. Although the formula is derived for the case in which radial velocities are measured by cross-correlation, we show through numerical simulations that the formula accurately describes the cases where the radial velocities are measured with the iodine absorption-cell technique. The formula relies on prior knowledge of the parameters describing macroturbulence, instrumental broadening, and other broadening mechanisms, but even 30% errors in those parameters do not significantly change the results in typical circumstances. We show that the new analytic formula agrees with previous ones that had been computed on a case-by-case basis via numerical simulations. Finally, as one application of the new formula, we reassess the impact of the differential rotation on the RM velocity anomaly. We show that differential rotation of a rapidly rotating star may have a significant impact on future RM observations.
S. Youssefian; J. E. Jakes; N. Rahbar
2017-01-01
A combination of experimental, theoretical and numerical studies is used to investigate the variation of elastic moduli of lignocellulosic (bamboo) fiber cell walls with moisture content (MC). Our Nanoindentation results show that the longitudinal elastic modulus initially increased to a maximum value at about 3% MC and then decreased linearly with increasing MC. In...
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1998-01-01
A prognostic cloud scheme named McRAS (Microphysics of clouds with Relaxed Arakawa-Schubert Scheme) was developed with the aim of improving cloud microphysics and cloud-radiation interactions in GCMs. McRAS distinguishes convective, stratiform, and boundary-layer clouds. The convective clouds merge into stratiform clouds on an hourly time-scale, while the boundary-layer clouds do so instantly. The cloud condensate transforms into precipitation following the auto-conversion relations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically with fully active cloud microphysics throughout their life cycle, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single column model (SCM) with the GATE Phase III data has shown that McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. An evaluation with the ARM-CART SCM data in a cloud model intercomparison exercise shows a reasonable but not outstandingly accurate simulation. Such a discrepancy is common to almost all models and is related, in part, to the input data quality. McRAS was implemented in the GEOS II GCM. A 50-month integration that was initialized with the ECMWF analysis of observations for January 1, 1987 and forced with the observed sea-surface temperatures, sea-ice distribution, and vegetation properties (biomes and soils), with prognostic soil moisture, snow cover, and hydrology, showed a very realistic simulation of cloud processes, in-cloud water and ice, and cloud-radiative forcing (CRF). The simulated ITCZ showed a realistic time-mean structure and seasonal cycle, while the simulated CRF showed sensitivity to the vertical distribution of cloud water, which can be easily altered by the choice of the time-constant and in-cloud critical cloud water amount regulators for auto-conversion. The CRF and its feedbacks also have a profound effect on the ITCZ. Even though somewhat weaker than observed, the McRAS-GCM simulation produces robust 30-60 day oscillations in the 200 hPa velocity potential. Two ensembles of 4-summer (July, August, September) simulations, one each for 1987 and 1988, show that the McRAS-GCM simulates realistic and statistically significant precipitation differences over India, Central America, and tropical Africa. Several seasonal simulations were performed with the McRAS-GEOS II GCM for the summer (June-July-August) and winter (December-January-February) periods to determine how the simulated clouds and CRFs would be affected by: (i) advection of clouds; (ii) cloud-top entrainment instability; (iii) cloud water inhomogeneity correction; and (iv) cloud production and dissipation in different cloud processes. The results show that each of these processes contributes to the simulated cloud fraction and CRF.
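For reference, the Sundqvist-type auto-conversion named above has the following generic form; the constants shown are illustrative placeholders, not McRAS's tuned values:

```python
import numpy as np

def sundqvist_autoconversion(q_c, q_crit=3e-4, c0=1e-4):
    """Sundqvist-type conversion of cloud condensate to precipitation:
    P = c0 * q_c * (1 - exp(-(q_c / q_crit)**2)), with c0 (1/s) the
    inverse time constant and q_crit (kg/kg) the critical in-cloud water
    content - the two auto-conversion regulators the abstract says
    control the CRF sensitivity. Values here are illustrative."""
    return c0 * q_c * (1.0 - np.exp(-((q_c / q_crit) ** 2)))
```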
Abouelnasr, Mahmoud K F; Smit, Berend
2012-09-07
The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.
Effect of accelerated global expansion on the bending of light
NASA Astrophysics Data System (ADS)
Aghili, Mir Emad; Bolen, Brett; Bombelli, Luca
2017-01-01
In 2007 Rindler and Ishak showed that, contrary to previous claims, the value of the cosmological constant does have an effect on light deflection by a gravitating object in an expanding universe. In their work they considered a Schwarzschild-de Sitter (SdS) spacetime, which has a constant asymptotic expansion rate H_0. A model with a time-dependent H(t) was studied by Kantowski et al., who consider in their 2010 paper a "Swiss-cheese" model of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime with an embedded SdS bubble. In this paper, we generalize the Rindler and Ishak model to time-varying H(t) in another way, by considering light bending in a McVittie metric representing a gravitating object in a FLRW cosmological background. We carry out numerical simulations of the propagation of null geodesics in different McVittie spacetimes, in which we keep the values of the distances from the observer to the lensing object and to the source fixed, and vary the form of H(t).
Development of a polarized neutron beam line at Algerian research reactors using McStas software
NASA Astrophysics Data System (ADS)
Makhloufi, M.; Salah, H.
2017-02-01
Unpolarized instrumentation has long been studied and designed using the McStas simulation tool, but only recently have new models been developed for McStas to simulate polarized neutron scattering instruments. In the present contribution, we used the McStas software to design a polarized neutron beam line, taking advantage of the reflectometer and diffractometer spectrometers available in Algeria. Both thermal and cold neutrons were considered. Polarization was achieved with two types of supermirror polarizers, FeSi and CoCu, provided by the HZB institute. For the sake of performance assessment and comparison, the polarizers were characterized and their characteristics reproduced. The simulated instruments are reported. A flipper and electromagnets for the guide field were developed. Further developments, including analyzers and upgrading of the existing spectrometers, are underway.
Comparison of Fluka-2006 Monte Carlo Simulation and Flight Data for the ATIC Detector
NASA Technical Reports Server (NTRS)
Gunasingha, R.M.; Fazely, A.R.; Adams, J.H.; Ahn, H.S.; Bashindzhagyan, G.L.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T.G.; Isbert, J.;
2007-01-01
We have performed a detailed Monte Carlo (MC) simulation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2006, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active Bismuth Germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification and, for particle tracking, three projective layers of x-y scintillator hodoscopes, located above, in the middle of, and below a 0.75 nuclear interaction length graphite target. Our simulations are part of an analysis package for both the nuclear (A) and energy dependences of different nuclei interacting in the ATIC detector. The MC simulates the response of different components of the detector, such as the Si-matrix, the scintillator hodoscopes, and the BGO calorimeter, to various nuclei. We present comparisons of the FLUKA-2006 MC calculations with GEANT calculations and with the ATIC CERN data and ATIC flight data.
New developments in the McStas neutron instrument simulation package
NASA Astrophysics Data System (ADS)
Willendrup, P. K.; Knudsen, E. B.; Klinkby, E.; Nielsen, T.; Farhi, E.; Filges, U.; Lefmann, K.
2014-07-01
The McStas neutron ray-tracing software package is a versatile tool for building accurate simulators of neutron scattering instruments at reactors, short- and long-pulsed spallation sources such as the European Spallation Source. McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedical applications, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. We therefore characterized a radial-beam LED and applied those characteristics to the MC simulation as the light source. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon launch area. The third is the refraction effect at the contact area between the LED and a turbid medium. For verification of the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.
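The three source characteristics can be folded into a single photon-launch routine. In the sketch below, the Lambertian angular profile and the refractive indices are assumptions; the paper uses measured LED characteristics:

```python
import numpy as np

def launch_photon(r_led, n_led=1.5, n_medium=1.4, rng=None):
    """Sample one initial photon for a radial-beam LED source, covering the
    three characteristics discussed above: (1) launch position uniform over
    the emitting disc, (2) a non-collimated emission angle (Lambertian
    assumed), and (3) Snell refraction at the LED-medium contact."""
    rng = rng or np.random.default_rng()
    r = r_led * np.sqrt(rng.random())            # uniform over the disc
    phi = 2.0 * np.pi * rng.random()
    x, y = r * np.cos(phi), r * np.sin(phi)
    theta_i = np.arcsin(np.sqrt(rng.random()))   # Lambertian polar angle
    sin_t = np.clip(n_led / n_medium * np.sin(theta_i), -1.0, 1.0)
    theta_t = np.arcsin(sin_t)                   # refracted polar angle
    return (x, y, 0.0), theta_t, phi
```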
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including to case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable and able to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
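spup itself is an R package; the stratified sampling idea it implements can be shown generically, here in Python with SciPy's inverse normal CDF (a sketch, not spup's API):

```python
import numpy as np
from scipy.stats import norm

def lhs_normal(n, mean, sd, rng=None):
    """Latin hypercube sample of one normally distributed uncertain input:
    stratify [0, 1] into n equal-probability bins, draw one uniform value
    per bin, shuffle, and map through the inverse CDF. This is the
    stratified/LHS sampling scheme described above, shown generically."""
    rng = rng or np.random.default_rng()
    u = (np.arange(n) + rng.random(n)) / n   # one draw per stratum
    rng.shuffle(u)
    return norm.ppf(u, loc=mean, scale=sd)
```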
Mashburn, Shana L.; Ryter, Derek W.; Neel, Christopher R.; Smith, S. Jerrod; Magers, Jessica S.
2014-02-10
The Central Oklahoma (Garber-Wellington) aquifer underlies about 3,000 square miles of central Oklahoma. The study area for this investigation was the extent of the Central Oklahoma aquifer. Water from the Central Oklahoma aquifer is used for public, industrial, commercial, agricultural, and domestic supply. With the exception of Oklahoma City, all of the major communities in central Oklahoma rely either solely or partly on groundwater from this aquifer. The Oklahoma City metropolitan area, incorporating parts of Canadian, Cleveland, Grady, Lincoln, Logan, McClain, and Oklahoma Counties, has a population of approximately 1.2 million people. As areas are developed for groundwater supply, increased groundwater withdrawals may result in decreases in long-term aquifer storage. The U.S. Geological Survey, in cooperation with the Oklahoma Water Resources Board, investigated the hydrogeology and simulated groundwater flow in the aquifer using a numerical groundwater-flow model. The purpose of this report is to describe an investigation of the Central Oklahoma aquifer that included analyses of the hydrogeology, hydrogeologic framework of the aquifer, and construction of a numerical groundwater-flow model. The groundwater-flow model was used to simulate groundwater levels and for water-budget analysis. A calibrated transient model was used to evaluate changes in groundwater storage associated with increased future water demands.
Prediction of charge mobility in organic semiconductors with consideration of the grain-size effect
NASA Astrophysics Data System (ADS)
Park, Jin Woo; Lee, Kyu Il; Choi, Youn-Suk; Kim, Jung-Hwa; Jeong, Daun; Kwon, Young-Nam; Park, Jong-Bong; Ahn, Ho Young; Park, Jeong-Il; Lee, Hyo Sug; Shin, Jaikwang
2016-09-01
A new computational model to predict the hole mobility of poly-crystalline organic semiconductors in thin film was developed (refer to Phys. Chem. Chem. Phys., 2016, DOI: 10.1039/C6CP02993K). Site energy differences and transfer integrals in crystalline morphologies of organic molecules were obtained from quantum chemical calculation, in which the periodic boundary condition was efficiently applied to capture the interactions with the surrounding molecules in the crystalline organic layer. Then the parameters were employed in kinetic Monte Carlo (kMC) simulations to estimate the carrier mobility. Carrier transport in multiple directions has been considered in the kMC simulation to mimic polycrystalline characteristic in thin-film condition. Furthermore, the calculated mobility was corrected with a calibration equation based on the microscopic images of thin films to take the effect of grain boundary into account. As a result, good agreement was observed between the predicted and measured hole mobility values for 21 molecular species: the coefficient of determination (R2) was estimated to be 0.83 and the mean absolute error was 1.32 cm2 V-1 s-1. This numerical approach can be applied to any molecules for which crystal structures are available and will provide a rapid and precise way of predicting the device performance.
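The kMC stage of such a workflow can be sketched generically: given the hop rates out of the current molecular site (built from the computed site-energy differences and transfer integrals), one event and one waiting time are drawn per step. A minimal, library-agnostic version:

```python
import numpy as np

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step over the available hops from the
    current site: choose event i with probability rate_i / sum(rates)
    and advance the clock by an exponentially distributed residence
    time (the standard BKL/Gillespie scheme)."""
    total = rates.sum()
    i = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
    dt = -np.log(rng.random()) / total
    return i, dt
```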
Scalar mixing in LES/PDF of a high-Ka premixed turbulent jet flame
NASA Astrophysics Data System (ADS)
You, Jiaping; Yang, Yue
2016-11-01
We report a large-eddy simulation (LES)/probability density function (PDF) study of a high-Ka premixed turbulent flame in the Lund University Piloted Jet (LUPJ) flame series, which has been investigated using direct numerical simulation (DNS) and experiments. The target flame, featuring broadened preheat and reaction zones, is categorized into the broken reaction zone regime. In the present study, three widely used mixing models, namely the Interaction by Exchange with the Mean (IEM), Modified Curl (MC), and Euclidean Minimum Spanning Tree (EMST) models, are applied to assess their performance through detailed a posteriori comparisons with DNS. A dynamic model for the time scale of scalar mixing is formulated to describe the turbulent mixing of scalars at small scales. Better quantitative agreement for the mean temperature and mean mass fractions of major and minor species is obtained with the MC and EMST models than with the IEM model. The multi-scalar mixing in composition space with the three models is analyzed to assess the modeling of the conditional molecular diffusion term. In addition, we demonstrate that the product of OH and CH2O concentrations can be a good surrogate of the local heat release rate in this flame. This work is supported by the National Natural Science Foundation of China (Grant Nos. 11521091 and 91541204).
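Of the three models, IEM is simple enough to state in two lines; a sketch in which omega would be supplied by a mixing time-scale model such as the dynamic one formulated in the paper:

```python
import numpy as np

def iem_mix(phi, phi_mean, omega, dt, c_phi=2.0):
    """Exact integration of the IEM mixing model over one step: each
    particle's composition relaxes toward the local mean,
    d(phi)/dt = -(c_phi / 2) * omega * (phi - <phi>), with omega the
    scalar mixing frequency. c_phi = 2 is the conventional constant."""
    decay = np.exp(-0.5 * c_phi * omega * dt)
    return phi_mean + (phi - phi_mean) * decay
```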
DOE Office of Scientific and Technical Information (OSTI.GOV)
Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross
2016-06-15
Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
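The per-voxel noise model at the heart of such a projection-based simulator can be sketched as follows; the sensitivity constant stands in for the full system model, and the blurring, scatter, randoms, and attenuation terms described above are omitted:

```python
import numpy as np

def noisy_frame_tac(t, tac, frame_edges, sensitivity=5e3, rng=None):
    """Average a continuous time activity curve over each frame and add
    Poisson counting noise. 'sensitivity' (expected counts per activity
    unit per second) is an illustrative placeholder; each frame is
    assumed to contain at least one sample of (t, tac)."""
    rng = rng or np.random.default_rng()
    t, tac = np.asarray(t), np.asarray(tac, dtype=float)
    out = np.empty(len(frame_edges) - 1)
    for i, (t0, t1) in enumerate(zip(frame_edges[:-1], frame_edges[1:])):
        mean_act = tac[(t >= t0) & (t < t1)].mean()   # frame-average activity
        lam = mean_act * (t1 - t0) * sensitivity      # expected counts
        out[i] = rng.poisson(lam) / ((t1 - t0) * sensitivity)
    return out
```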
Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy by comparing with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by EGSnrc code with 3 × 10^8 to 1.5 × 10^9 histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth dose (PDD) of SC, MC, and film measurements were analyzed. Cross beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and difference in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation speed than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan
2017-07-01
The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, the three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process improved the simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
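The Markov-chain half of a CA-MC model reduces to applying a transition probability matrix to current land-use proportions; a minimal sketch (the CA half, which spatially allocates the projected quantities using suitability layers such as the AHP or FR weights above, is not shown):

```python
import numpy as np

def project_landuse(areas, transition, steps=1):
    """Markov-chain stage of a CA-MC model: project land-use area
    fractions forward with a row-stochastic transition matrix estimated
    from two historical maps. 'steps' counts transition intervals
    (e.g. decades); all inputs here are illustrative."""
    p = np.linalg.matrix_power(np.asarray(transition, dtype=float), steps)
    return np.asarray(areas, dtype=float) @ p
```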
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
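All such move sets are ultimately gated by the standard Metropolis criterion; a minimal sketch for a single torsion-angle local move with a toy torsional potential (generic Metropolis sampling, not the authors' CBC implementation):

    import math, random

    def metropolis_accept(delta_e, kT):
        """Standard Metropolis criterion used by local-move MC samplers."""
        return delta_e <= 0.0 or random.random() < math.exp(-delta_e / kT)

    def energy(phi):
        """Toy threefold torsional potential for one dihedral angle (rad)."""
        return 2.0 * (1.0 + math.cos(3.0 * phi))

    phi, kT = 0.0, 0.6
    for step in range(10000):
        trial = phi + random.uniform(-0.3, 0.3)   # local torsion move
        if metropolis_accept(energy(trial) - energy(phi), kT):
            phi = trial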
Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS
NASA Astrophysics Data System (ADS)
Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.
2014-07-01
Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons is simulated using a full-scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide, and by using newly developed event-logging capability, the neutron state parameters corresponding to un-reflected neutrons are recorded at each scattering. This information is handed back to MCNPX, where it serves as neutron source input for a second MCNPX simulation. This simulation enables calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism is employed to record the scatterings along the guides, which is exploited to determine the supermirror quality requirements (i.e. m-values) needed at different positions along the beam guide to transport neutrons in the same guide/source setup.
Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas
2017-05-01
This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonics integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
NASA Astrophysics Data System (ADS)
Jung, Hyunuk; Shin, Jungsuk; Chung, Kwangzoo; Han, Youngyih; Kim, Jinsung; Choi, Doo Ho
2015-05-01
The aim of this study was to develop an independent dose verification system by using a Monte Carlo (MC) calculation method for intensity modulated radiation therapy (IMRT) conducted by using a Varian Novalis Tx (Varian Medical Systems, Palo Alto, CA, USA) equipped with a highdefinition multi-leaf collimator (HD-120 MLC). The Geant4 framework was used to implement a dose calculation system that accurately predicted the delivered dose. For this purpose, the Novalis Tx Linac head was modeled according to the specifications acquired from the manufacturer. Subsequently, MC simulations were performed by varying the mean energy, energy spread, and electron spot radius to determine optimum values of irradiation with 6-MV X-ray beams by using the Novalis Tx system. Computed percentage depth dose curves (PDDs) and lateral profiles were compared to the measurements obtained by using an ionization chamber (CC13). To validate the IMRT simulation by using the MC model we developed, we calculated a simple IMRT field and compared the result with the EBT3 film measurements in a water-equivalent solid phantom. Clinical cases, such as prostate cancer treatment plans, were then selected, and MC simulations were performed. The accuracy of the simulation was assessed against the EBT3 film measurements by using a gamma-index criterion. The optimal MC model parameters to specify the beam characteristics were a 6.8-MeV mean energy, a 0.5-MeV energy spread, and a 3-mm electron radius. The accuracy of these parameters was determined by comparison of MC simulations with measurements. The PDDs and the lateral profiles of the MC simulation deviated from the measurements by 1% and 2%, respectively, on average. The computed simple MLC fields agreed with the EBT3 measurements with a 95% passing rate with 3%/3-mm gamma-index criterion. Additionally, in applying our model to clinical IMRT plans, we found that the MC calculations and the EBT3 measurements agreed well with a passing rate of greater than 95% on average with a 3%/3-mm gamma-index criterion. In summary, the Novalis Tx Linac head equipped with a HD-120 MLC was successfully modeled by using a Geant4 platform, and the accuracy of the Geant4 platform was successfully validated by comparisons with measurements. The MC model we have developed can be a useful tool for pretreatment quality assurance of IMRT plans and for commissioning of radiotherapy treatment planning.
OpenMC In Situ Source Convergence Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee
2016-05-07
We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, the simulation is iterated for a user-settable, fixed number of steps, and convergence is assumed to have been achieved. We instead implement a method to detect convergence, using the stochastic oscillator to identify convergence of source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect convergence and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that tallying is not started too early, by a user setting overly optimistic parameters, or too late, by setting overly conservative ones.
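A minimal sketch of the entropy-based idea, substituting a plain moving-average stability test for the paper's stochastic-oscillator indicator; bin counts, box bounds, and tolerances are illustrative:

    import numpy as np

    def shannon_entropy(source_xyz, bins=(8, 8, 8), box=((0, 1), (0, 1), (0, 1))):
        """Shannon entropy of source sites binned on a spatial mesh."""
        hist, _ = np.histogramdd(source_xyz, bins=bins, range=box)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def converged(entropies, window=20, tol=0.01):
        """Crude stand-in for the stochastic-oscillator test: declare
        convergence when the moving average of the entropy stabilises."""
        if len(entropies) < 2 * window:
            return False
        recent = np.mean(entropies[-window:])
        earlier = np.mean(entropies[-2 * window:-window])
        return abs(recent - earlier) < tol * abs(earlier)

    rng = np.random.default_rng(1)
    entropies = []
    for batch in range(100):
        pts = rng.random((5000, 3))      # stand-in for fission source sites
        entropies.append(shannon_entropy(pts))
    print(converged(entropies))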
Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results and experimental electron-probe microanalysis (EPMA) measurements, as well as phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
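For a binary system, the alpha-factor relates the certified weight fraction c to the measured (or MC-calculated) k-ratio k through the Ziebold-Ogilvie form c/k = alpha + (1 - alpha)c; a minimal sketch, with illustrative numbers rather than the SRM data:

    def alpha_factor(c, k):
        """Binary Ziebold-Ogilvie alpha-factor from weight fraction c and
        k-ratio k: c/k = alpha + (1 - alpha)*c, hence
        alpha = c*(1 - k) / (k*(1 - c))."""
        return c * (1.0 - k) / (k * (1.0 - c))

    print(alpha_factor(c=0.40, k=0.35))  # illustrative values only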
Borzov, Egor; Daniel, Shahar; Bar‐Deroma, Raquel
2016-01-01
Total skin electron irradiation (TSEI) is a complex technique which requires many nonstandard measurements and dosimetric procedures. The purpose of this work was to validate measured dosimetry data by Monte Carlo (MC) simulations using EGSnrc-based codes (BEAMnrc and DOSXYZnrc). Our MC simulations consisted of two major steps. In the first step, the incident electron beam parameters (energy spectrum, FWHM, mean angular spread) were adjusted to match the measured data (PDD and profile) at SSD = 100 cm for an open field. In the second step, these parameters were used to calculate dose distributions at the treatment distance of 400 cm. MC simulations of dose distributions from single and dual fields at the treatment distance were performed in a water phantom. Dose distribution from the full treatment with six dual fields was simulated in a CT-based anthropomorphic phantom. MC calculations were compared to the available set of measurements used in clinical practice. For one direct field, MC-calculated PDDs agreed within 3%/1 mm with the measurements, and lateral profiles agreed within 3% with the measured data. For the output factor (OF), the measured and calculated results were within 2% agreement. The optimal angle of 17° was confirmed for the dual-field setup. The MC-calculated multiplication factor (B12-factor), which relates the skin dose for the whole treatment to the dose from one calibration field, for setups with and without degrader was 2.9 and 2.8, respectively. The measured B12-factor was 2.8 for both setups. The difference between calculated and measured values was within 3.5%. It was found that a degrader provides a more homogeneous dose distribution. The measured X-ray contamination for the full treatment was 0.4%, compared to the 0.5% X-ray contamination obtained with the MC calculation. Feasibility of MC simulation in an anthropomorphic phantom for a full TSEI treatment was proved and is reported for the first time in the literature. The results of our MC calculations were found to be in general agreement with the measurements, providing a promising tool for further studies of dose distribution calculations in TSEI. PACS number(s): 87.10.Rt, 87.55.K, 87.55.ne PMID:27455502
Palombo, Marco; Gentili, Silvia; Bozzali, Marco; Macaluso, Emiliano; Capuani, Silvia
2015-05-01
In this MRI study, diffusional kurtosis imaging (DKI) and T2* multiecho relaxometry were measured from the white matter (WM) of human brains and correlated with each other, with the aim of investigating the influence of magnetic susceptibility (Δχ(H2O-TISSUE)) on the contrast. We focused our in vivo analysis on assessing the dependence of mean, axial, and radial kurtosis (MK, K‖, K⊥), as well as DTI indices, on Δχ(H2O-TISSUE) (quantified by T2*) between extracellular water and WM tissue molecules. Moreover, Monte Carlo (MC) simulations were used to elucidate the experimental data. A significant positive correlation was observed between K⊥, MK and R2* = 1/T2*, suggesting that Δχ(H2O-TISSUE) could be a source of DKI contrast. In this view, K⊥ and MK-map contrasts in human WM would not just be due to different restricted diffusion processes of compartmentalized water but also to local Δχ(H2O-TISSUE). However, MC simulations show a strong dependence of the DKI signal on microstructural rearrangement and only a feeble dependence on Δχ(H2O-TISSUE). Our results suggest a concomitant and complementary role of multi-compartmentalized diffusion processes and Δχ(H2O-TISSUE) in DKI contrast, which might explain why kurtosis contrast is more sensitive than DTI in discriminating between different tissues. However, more realistic numerical simulations are needed to confirm this statement. © 2014 Wiley Periodicals, Inc.
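For context, the kurtosis indices discussed here are conventionally estimated by fitting the standard DKI signal representation (the textbook form; the abstract itself does not quote it), where D_app and K_app are the apparent diffusivity and apparent kurtosis along one gradient direction:

    \ln S(b) = \ln S_0 - b\, D_{\mathrm{app}} + \tfrac{1}{6}\, b^{2} D_{\mathrm{app}}^{2} K_{\mathrm{app}}

MK, K‖ and K⊥ then follow by averaging K_app over all directions, along the principal diffusion axis, and perpendicular to it, respectively.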
Diffusion of interacting particles in discrete geometries: Equilibrium and dynamical properties
NASA Astrophysics Data System (ADS)
Becker, T.; Nelissen, K.; Cleuren, B.; Partoens, B.; Van den Broeck, C.
2014-11-01
We expand on a recent study of a lattice model of interacting particles [Phys. Rev. Lett. 111, 110601 (2013), 10.1103/PhysRevLett.111.110601]. The adsorption isotherm and equilibrium fluctuations in particle number are discussed as a function of the interaction. Their behavior is similar to that of interacting particles in porous materials. Different expressions for the particle jump rates are derived from transition-state theory. Which expression should be used depends on the strength of the interparticle interactions. Analytical expressions for the self- and transport diffusion are derived when correlations, caused by memory effects in the environment, are neglected. The diffusive behavior is studied numerically with kinetic Monte Carlo (kMC) simulations, which reproduce the diffusion including correlations. The effect of correlations is studied by comparing the analytical expressions with the kMC simulations. It is found that the Maxwell-Stefan diffusion can exceed the self-diffusion. To our knowledge, this is the first time this has been observed. The diffusive behavior in one-dimensional and higher-dimensional systems is qualitatively the same, with the effect of correlations decreasing for increasing dimension. The length dependence of both the self- and transport diffusion is studied for one-dimensional systems. For long lengths the self-diffusion shows a 1/L dependence. Finally, we discuss when agreement with experiments and simulations can be expected. The assumption that particles in different cavities do not interact is expected to hold quantitatively at low and medium particle concentrations if the particles are not strongly interacting.
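A minimal rejection-free kMC sketch for hard-core particles hopping on a one-dimensional periodic lattice, assuming uniform hop rates (the model's interaction-dependent rates are omitted for brevity):

    import random

    def kmc_step(occ, rate, t):
        """One rejection-free kMC step: enumerate allowed hops, pick one
        uniformly (all rates equal here), advance time exponentially."""
        L = len(occ)
        moves = [(i, (i + d) % L) for i in range(L) if occ[i]
                 for d in (-1, 1) if not occ[(i + d) % L]]
        if not moves:
            return t
        total = rate * len(moves)          # total escape rate
        i, j = random.choice(moves)
        occ[i], occ[j] = False, True
        return t + random.expovariate(total)

    occ = [k < 20 for k in range(100)]     # 20 particles on 100 sites
    random.shuffle(occ)
    t = 0.0
    for _ in range(10000):
        t = kmc_step(occ, rate=1.0, t=t)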
NASA Astrophysics Data System (ADS)
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally very expensive. Thus, improvement of computational efficiency will be essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Improved QM Methods and Their Application in QM/MM Studies of Enzymatic Reactions
NASA Astrophysics Data System (ADS)
Jorgensen, William L.
2007-03-01
Quantum mechanics (QM) and Monte Carlo statistical mechanics (MC) simulations have been used by us since the early 1980s to study reaction mechanisms and the origin of solvent effects on reaction rates. A goal was always to perform the QM and MC/MM calculations simultaneously in order to obtain free-energy surfaces in solution with no geometrical restrictions. This was achieved by 2002 and complete free-energy profiles and surfaces with full sampling of solute and solvent coordinates can now be obtained through one job submission using BOSS [JCC 2005, 26, 1689]. Speed and accuracy demands also led to development of the improved semiempirical QM method, PDDG-PM3 [JCC 1601 (2002); JCTC 817 (2005)]. The combined PDDG-PM3/MC/FEP methodology has provided excellent results for free energies of activation for many reactions in numerous solvents. Recent examples include Cope, Kemp and E1cb eliminations [JACS 8829 (2005), 6141 (2006); JOC 4896 (2006)], as well as enzymatic reactions catalyzed by the putative Diels-Alderase, macrophomate synthase, and fatty-acid amide hydrolase [JACS 3577 (2005); JACS (2006)]. The presentation will focus on the accuracy that is currently achievable in such QM/MM studies and the accuracy of the underlying QM methodology including extensive comparisons of results from PDDG-PM3 and ab initio DFT methods.
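The free-energy profiles mentioned here rest, in the standard formulation, on the Zwanzig free-energy-perturbation identity, with the ensemble average taken over MC-sampled configurations of the reference state A (the conventional textbook form; the abstract does not spell it out):

    \Delta G_{A \to B} = -k_{\mathrm{B}} T \,\ln \left\langle \exp\!\left[ -\frac{E_B - E_A}{k_{\mathrm{B}} T} \right] \right\rangle_{A}

In practice the A-to-B change is split into many small windows so that each exponential average converges.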
An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising
Guo, Muran; Chen, Tao; Wang, Ben
2017-01-01
Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886
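A small sketch of the co-prime geometry and its difference coarray, showing the holes that the matrix-completion (MC) interpolation is designed to fill; the M and N values are illustrative:

    import numpy as np

    def coprime_positions(M, N):
        """Sensor positions (in half-wavelength units) of a standard co-prime
        array: N elements at spacing M plus 2M elements at spacing N."""
        return np.unique(np.concatenate([M * np.arange(N), N * np.arange(2 * M)]))

    def difference_coarray(pos):
        """All pairwise differences; holes are the missing integer lags."""
        lags = np.unique((pos[:, None] - pos[None, :]).ravel())
        full = np.arange(lags.min(), lags.max() + 1)
        holes = np.setdiff1d(full, lags)
        return lags, holes

    pos = coprime_positions(M=3, N=5)
    lags, holes = difference_coarray(pos)
    print("positions:", pos)
    print("holes in the coarray:", holes)  # the lags MC interpolation fills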
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazalova-Carter, Magdalena; Liu, Michael; Palma, Bianey
2015-04-15
Purpose: To measure radiation dose in a water-equivalent medium from very high-energy electron (VHEE) beams and make comparisons to Monte Carlo (MC) simulation results. Methods: Dose in a polystyrene phantom delivered by an experimental VHEE beam line was measured with Gafchromic films for three 50 MeV and two 70 MeV Gaussian beams of 4.0–6.9 mm FWHM and compared to corresponding MC-simulated dose distributions. MC dose in the polystyrene phantom was calculated with the EGSnrc/BEAMnrc and DOSXYZnrc codes based on the experimental setup. Additionally, the effect of 2% beam energy measurement uncertainty and possible non-zero beam angular spread on MC dose distributions was evaluated. Results: MC simulated percentage depth dose (PDD) curves agreed with measurements within 4% for all beam sizes at both 50 and 70 MeV VHEE beams. Central axis PDD at 8 cm depth ranged from 14% to 19% for the 5.4–6.9 mm 50 MeV beams and from 14% to 18% for the 4.0–4.5 mm 70 MeV beams. MC simulated relative beam profiles of regularly shaped Gaussian beams evaluated at depths of 0.64 to 7.46 cm agreed with measurements to within 5%. A 2% beam energy uncertainty and 0.286° beam angular spread corresponded to a maximum 3.0% and 3.8% difference in depth dose curves of the 50 and 70 MeV electron beams, respectively. Absolute dose differences between MC simulations and film measurements of regularly shaped Gaussian beams were between 10% and 42%. Conclusions: The authors demonstrate that relative dose distributions for VHEE beams of 50–70 MeV can be measured with Gafchromic films and modeled with Monte Carlo simulations to an accuracy of 5%. The reported absolute dose differences, likely caused by imperfect beam steering and subsequent charge loss, revealed the importance of accurate VHEE beam control and diagnostics.
Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald
2010-05-01
To address the lack of an accurate dose estimation method in cone beam computed tomography (CBCT), we performed point dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in the polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. A MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half value layer, x-ray spectrum, and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode, respectively. The CTDIw from MC agreed with the MOSFET measurements to within 5%. In conclusion, a MC model for Varian CBCT has been established, and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry.
Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R
2014-03-01
Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M
2013-04-16
We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.
Simulating x-ray telescopes with McXtrace: a case study of ATHENA's optics
NASA Astrophysics Data System (ADS)
Ferreira, Desiree D. M.; Knudsen, Erik B.; Westergaard, Niels J.; Christensen, Finn E.; Massahi, Sonny; Shortt, Brian; Spiga, Daniele; Solstad, Mathias; Lefmann, Kim
2016-07-01
We use the X-ray ray-tracing package McXtrace to simulate the performance of X-ray telescopes based on Silicon Pore Optics (SPO) technologies. We use as reference the design of the optics of the planned X-ray mission Advanced Telescope for High ENergy Astrophysics (ATHENA), which is designed as a single X-ray telescope populated with stacked SPO substrates forming mirror modules to focus X-ray photons. We show that it is possible to simulate the SPO pores in detail and qualify the use of McXtrace for in-depth analysis of in-orbit performance and laboratory X-ray test results.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
Depolarization of an Ultrashort Pulse in a Disordered Ensemble of Mie Particles
NASA Astrophysics Data System (ADS)
Gorodnichev, E. E.; Ivliev, S. V.; Kuzovlev, A. I.; Rogozkin, D. B.
2017-12-01
We study propagation of an ultrashort pulse of polarized light through a turbid medium with the Reynolds-McCormick phase function. Within the basic mode approach to the vector radiative transfer equation, the temporal profile of the degree of polarization is calculated analytically with the use of the small-angle approximation. The degree of polarization is shown to be described by the self-similar dependence on some combination of the transport scattering coefficient, the temporal delay and the sample thickness. Our results are in excellent agreement with the data of numerical simulations carried out previously for aqueous suspension of polystyrene microspheres.
Characterization of plasma wake excitation and particle trapping in the nonlinear bubble regime
NASA Astrophysics Data System (ADS)
Benedetti, Carlo; Schroeder, Carl; Esarey, Eric; Leemans, Wim
2010-11-01
We investigate the excitation of a nonlinear wake (bubble) by an ultra-short (k_p L ~ 2), intense (e A_laser/mc^2 > 2) laser pulse interacting with an underdense plasma. A detailed analysis of particle orbits in the wakefield is performed by using reduced analytical models and numerical simulations performed with the 2D cylindrical, envelope, ponderomotive, hybrid PIC/fluid code INF&RNO, recently developed at LBNL. In particular we study the requirements for injection and/or trapping of background plasma electrons in the nonlinear wake. Characterization of the phase-space properties of the injected particle bunch will also be discussed.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
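As a quick sanity check on the quoted figures (plain arithmetic from the reported timings, not the paper's code; near-ideal scaling is an assumption):

    def speedup(serial_hours, cloud_minutes):
        """Ratio of single-machine runtime to cloud runtime."""
        return serial_hours * 60.0 / cloud_minutes

    def parallel_efficiency(s, nodes):
        """Fraction of ideal linear scaling actually achieved."""
        return s / nodes

    s = speedup(2.58, 3.3)                  # timings reported for 1e6 electrons
    print(round(s, 1), "x speed-up")        # ~46.9, matching the quoted 47x
    print(round(parallel_efficiency(s, 100), 2), "of ideal on 100 nodes")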
Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations
Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi
2016-01-01
Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, higher SE is achieved by perturbing the conventional MD simulations with a MC structure-acceptance judgment, which is based on the degree of coincidence of small angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations yielded models that were, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that SE was improved for 3 out of 5 proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
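A minimal sketch of such a SAXS-driven acceptance judgment, assuming a chi-square-style profile discrepancy and a Metropolis-like rule (the authors' exact coincidence measure and acceptance schedule may differ):

    import numpy as np

    def saxs_chi2(i_sim, i_target, sigma):
        """Discrepancy between simulated and target SAXS intensity profiles."""
        return np.mean(((i_sim - i_target) / sigma) ** 2)

    def accept_structure(chi2_new, chi2_old, beta=50.0):
        """Accept if the new snapshot matches the target profile better;
        otherwise accept with a Boltzmann-like probability (sketch only)."""
        if chi2_new <= chi2_old:
            return True
        return np.random.rand() < np.exp(-beta * (chi2_new - chi2_old))

    q = np.linspace(0.01, 0.3, 50)            # scattering vector (1/angstrom)
    target = np.exp(-(q * 12) ** 2)           # toy target profile
    print(accept_structure(saxs_chi2(target * 1.01, target, 0.02),
                           saxs_chi2(target * 1.05, target, 0.02)))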
In-simulator training of driving abilities in a person with a traumatic brain injury.
Gamache, Pierre-Luc; Lavallière, Martin; Tremblay, Mathieu; Simoneau, Martin; Teasdale, Normand
2011-01-01
This study reports the case of a 23-year-old woman (MC) who sustained a severe traumatic brain injury in 2004. After her accident, her driving license was revoked. Despite recovering normal neuropsychological functions in the following years, MC was unable to renew her license, failing four on-road evaluations assessing her fitness to drive. In hope of an eventual license renewal, MC went through an in-simulator training programme in the laboratory in 2009. The training programme aimed at improving features of MC's driving behaviour that were identified as being problematic in prior on-road evaluations. To do so, proper driving behaviour was reinforced via driving-specific feedback provided during the training sessions. After 25 sessions in the simulator (over a period of 4 months), MC significantly improved various components of her driving. Notably, compared to early sessions, later ones were associated with a reduced cognitive load, less jerky speed profiles when stopping at intersections and better vehicle control and positioning. A 1-year retention test showed most of these improvements were consistent. The learning principles underlying well conducted simulator-based education programmes have a strong scientific basis. A simulator training programme like this one represents a promising avenue for driving rehabilitation. It allows individuals without a driving license to practice and improve their skills in a safe and realistic environment.
Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.
Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna
2018-04-28
We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In the MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with more realistic carrier gas. In this study, not only a primarily velocity scaling thermostat is considered, but also Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods are covered. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference to the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].
Melt focusing and CO2 extraction at mid-ocean ridges: simulations of reactive two-phase flow
NASA Astrophysics Data System (ADS)
Keller, T.; Katz, R. F.; Hirschmann, M. M.
2016-12-01
The deep CO2 cycle is the result of fluxes between near-surface and mantle reservoirs. Outgassing from mid-ocean ridges is one of the primary fluxes of CO2 from the asthenosphere into the ocean-atmosphere reservoir. Focusing of partial melt to the ridge axis crucially controls this flux. However, the role of volatiles, in particular CO2 and H2O, on melt transport processes beneath ridges remains poorly understood. We investigate this transport using numerical simulations of two-phase, multi-component magma/mantle dynamics. The phases are solid mantle and liquid magma; the components are dunite, MORB, hydrated basalt, and carbonated basalt. These effective components capture accepted features of mantle melting with volatiles. The fluid-dynamical model is McKenzie's formulation [1], while melting and reactive transport use the R_DMC method [2,3]. Our results indicate that volatiles cause channelized melt transport, which leads to significant variability in volume and composition of focused melt. The volatile-induced expansion of the melting regime at depth, however, has no influence on melt focusing; distal volatile-rich melts are not focused to the axis. Up to 50% of these melts are instead emplaced along the oceanic LAB. There, crystallization of accumulated melt leads to enrichment of CO2 and H2O in the deep lithosphere, which has implications for LAB rheology and volatile recycling by subduction. Results from a suite of simulations, constrained by catalogued observational data [4,5,6], enable predictions of global MOR CO2 output. By combining observational constraints with self-consistent numerical simulations we obtain a range of CO2 output from the global ridge system of 28-110 Mt CO2/yr, corresponding to mean CO2 contents of 50-200 ppm in the mantle. References: [1] McKenzie (1984), doi:10.1093/petrology/25.3.713. [2] Rudge, Bercovici & Spiegelman (2011), doi:10.1111/j.1365-246X.2010.04870.x. [3] Keller & Katz (2016), doi:10.1093/petrology/egw030. [4] Dalton, Langmuir & Gale (2014), doi:10.1126/science.1249466. [5] Gale, Langmuir & Dalton (2014), doi:10.1093/petrology/egu017. [6] White et al. (2001), doi:10.1093/petrology/42.6.1171. Fig.: Simulation results of MOR magma/mantle dynamics with H2O and CO2, showing Darcy flux magnitude for half-spreading rates of 1 and 5 cm/yr.
Enhanced Master Controller Unit Tester
NASA Technical Reports Server (NTRS)
Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony
2007-01-01
The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.
NASA Astrophysics Data System (ADS)
Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles
2017-10-01
RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in the presence of homogeneous, heterogeneous and anthropomorphic phantoms. The computation performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
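A simplified one-dimensional version of the gamma-index evaluation quoted above (clinical QA uses 2D/3D dose grids; the 3%/3 mm criterion and the toy profiles here are for illustration):

    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
        """Global 1D gamma index: for each reference point, minimise the
        combined distance / dose-difference metric over evaluated points."""
        d_max = d_ref.max()
        gamma = np.empty_like(d_ref)
        for k, (x, d) in enumerate(zip(x_ref, d_ref)):
            term = ((x_eval - x) / dta_mm) ** 2 + \
                   ((d_eval - d) / (dd_frac * d_max)) ** 2
            gamma[k] = np.sqrt(term.min())
        return gamma

    x = np.linspace(0, 100, 201)             # position (mm)
    ref = np.exp(-((x - 50) / 20) ** 2)      # toy reference profile
    ev = np.exp(-((x - 50.5) / 20) ** 2)     # slightly shifted measurement
    passing = (gamma_1d(x, ref, x, ev) <= 1.0).mean() * 100
    print(f"gamma passing rate: {passing:.1f}%")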
Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation
NASA Astrophysics Data System (ADS)
Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong
2018-02-01
Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and spatial resolution for optical imaging simulation. The lens implemented in GATE was validated against ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could significantly improve the image quality of bioluminescence and fluorescence imaging as compared with pinhole optics.
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
… iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC iterations … Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC results …
Thunder on the Right: Past and Present.
ERIC Educational Resources Information Center
Morris, Robert C.
1978-01-01
Comparing present day criticisms of U.S. education with those lodged during the McCarthy era, this article warns educators of comparable McCarthy tactics today, concluding that educators must "continually attempt to understand and cope with the numerous criticisms lodged against our schools". (JC)
NASA Astrophysics Data System (ADS)
Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.
2018-01-01
Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is < ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
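The three algorithm families compared here reduce, at their core, to three frame-difference metrics between repeated complex B-scans; a minimal sketch with synthetic data (the published algorithms add averaging, thresholding, and bulk-motion compensation):

    import numpy as np

    def octa_metrics(s1, s2):
        """Frame-difference flow metrics between two repeated complex
        OCT B-scans s1, s2 (2D complex arrays)."""
        amp = np.abs(np.abs(s1) - np.abs(s2))          # amplitude-based
        phase = np.abs(np.angle(s1 * np.conj(s2)))     # phase-based
        cplx = np.abs(s1 - s2)                         # complex-based
        return amp, phase, cplx

    rng = np.random.default_rng(0)
    static = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
    noise = 0.05 * (rng.standard_normal((64, 64))
                    + 1j * rng.standard_normal((64, 64)))
    amp, phase, cplx = octa_metrics(static, static + noise)
    print(amp.mean(), phase.mean(), cplx.mean())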
Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang
2010-03-01
The purpose of this study was to establish a dose estimation tool with Monte Carlo (MC) simulations. A 5-y-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom, which was used as input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager® was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. The MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. The model can be easily applied to multi-detector CT dosimetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 and 2304 cores. In the GPU simulations, each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with 10^8 histories or more and photon energies from 60 keV to 90 keV. Statistical analysis shows that the quality of the GPU and CPU images is essentially the same.
47 CFR 73.201 - Numerical designation of FM broadcast channels.
Code of Federal Regulations, 2014 CFR
2014-10-01
Section 73.201 assigns each FM broadcast channel a numerical designation, as shown in the table below: Frequency (Mc/s) 88.1 = Channel No. 201; 88.3 = Channel No. 202; ...
A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14
NASA Astrophysics Data System (ADS)
Wildes, A. R.; Šaroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.
2000-03-01
Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer at the I.L.L. were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both a 100 mm² and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.
The McGill Planar Hydrogen Atmosphere Code (McPHAC)
NASA Astrophysics Data System (ADS)
Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.; Rutledge, Robert E.
2012-04-01
The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation, at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.
Monte Carlo simulations in radiotherapy dosimetry.
Andreo, Pedro
2018-06-27
The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in the last decades. Its widespread use in the field has converted this computer simulation technique into a common tool for reference and treatment planning dosimetry calculations. This work reviews the different MC calculations made on dosimetric quantities, like stopping-power ratios and perturbation correction factors required for reference ionization chamber dosimetry, as well as the fully realistic MC simulations currently available of clinical accelerators, detectors and patient treatment planning. Issues raised include the necessity for consistency in the data throughout the entire dosimetry chain in reference dosimetry, and how Bragg-Gray theory breaks down for small photon fields. Both aspects are less critical for MC treatment planning applications, but there are important constraints like tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in different types of treatment planning systems, they mean that the uncertainties involved in MC treatment planning still remain "uncertain".
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed that makes use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. They cover the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute-force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
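As a rough sketch of the brute-force MC uncertainty quantification step described above (the 95% data range about the median of a simulation output), the following Python fragment samples uncertain inputs, evaluates a stand-in surrogate, and reports percentiles. The surrogate and parameter distributions are hypothetical placeholders, not the PAGOSA model.

import numpy as np

def mc_interval(surrogate, sample_params, n=10000, seed=1):
    # Brute-force MC: sample inputs, evaluate the surrogate, and report
    # the 95% range about the median of the output distribution.
    rng = np.random.default_rng(seed)
    out = np.array([surrogate(sample_params(rng)) for _ in range(n)])
    return np.percentile(out, [2.5, 50.0, 97.5])

# Hypothetical surrogate for jet tip velocity vs. two JWL-like parameters
surrogate = lambda p: 8.0 + 0.5 * p[0] - 0.2 * p[1] ** 2
sample_params = lambda rng: rng.normal(loc=[1.0, 0.5], scale=[0.1, 0.05])
print(mc_interval(surrogate, sample_params))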
LES of Temporally Evolving Mixing Layers by Three High Order Schemes
NASA Astrophysics Data System (ADS)
Yee, H.; Sjögreen, B.; Hadjadj, A.
2011-10-01
The performance of three high-order shock-capturing schemes is compared for large eddy simulations (LES) of temporally evolving mixing layers for different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to the highly compressible supersonic regime. The high-order schemes considered are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high-order nonlinear filter method (Yee & Sjögreen 2009) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameters agree well with the experimental results of Barone et al. (2006) and with published direct numerical simulations (DNS) by Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with the experimental data and DNS computations.
Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
The 3DHZETRN code, with improved neutron and light ion (Z (is) less than 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.
OneSAF as an In-Stride Mission Command Asset
2014-06-01
Keywords: Mission Command (MC), Modeling and Simulation (M&S), Distributed Interactive Simulation (DIS). Abstract: To provide greater interoperability and integration within Mission Command (MC) systems, the One Semi-Automated Forces (OneSAF) entity-level simulation is evolving from a tightly coupled client-server implementation approach. While DARPA began with a funded project to complete the capability as a "big bang" approach, the approach here is based on reuse and...
Computer Simulation of Electron Thermalization in CsI and CsI(Tl)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhiguo; Xie, YuLong; Cannon, Bret D.
2011-09-15
A Monte Carlo (MC) model was developed and implemented to simulate the thermalization of electrons in inorganic scintillator materials. The model incorporates electron scattering with both longitudinal optical and acoustic phonons. In this paper, the MC model was applied to simulate electron thermalization in CsI, both pure and doped with a range of thallium concentrations. The inclusion of internal electric fields was shown to increase the fraction of recombined electron-hole pairs and to broaden the thermalization distance and thermalization time distributions. The MC simulations indicate that electron thermalization, following γ-ray excitation, takes place within approximately 10 ps in CsI and that electrons can travel distances up to several hundreds of nanometers. Electron thermalization was studied for a range of incident γ-ray energies using electron-hole pair spatial distributions generated by the MC code NWEGRIM (NorthWest Electron and Gamma Ray Interaction in Matter). These simulations revealed that the partition of thermalized electrons between different species (e.g., recombined with self-trapped holes or trapped at thallium sites) varies with the incident energy. Implications for the phenomenon of nonlinearity in scintillator light yield are discussed.
Phase-Field Modeling of Sigma-Phase Precipitation in 25Cr7Ni4Mo Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Malik, Amer; Odqvist, Joakim; Höglund, Lars; Hertzman, Staffan; Ågren, John
2017-10-01
Phase-field modeling is used to simulate the formation of sigma phase in a model alloy mimicking a commercial super duplex stainless steel (SDSS) alloy, in order to study precipitation and growth of sigma phase under linear continuous cooling. The so-called Warren-Boettinger-McFadden (WBM) model is used to build the basis of the multiphase and multicomponent phase-field model. The thermodynamic inconsistency at the multiple junctions associated with the multiphase formulation of the WBM model is resolved by means of a numerical Cut-off algorithm. To make realistic simulations, all the kinetic and the thermodynamic quantities are derived from the CALPHAD databases at each numerical time step, using Thermo-Calc and TQ-Interface. The credibility of the phase-field model is verified by comparing the results from the phase-field simulations with the corresponding DICTRA simulations and also with the empirical data. 2D phase-field simulations are performed for three different cooling rates in two different initial microstructures. A simple model for the nucleation of sigma phase is also implemented in the first case. Simulation results show that the precipitation of sigma phase is characterized by the accumulation of Cr and Mo at the austenite-ferrite and the ferrite-ferrite boundaries. Moreover, it is observed that a slow cooling rate promotes the growth of sigma phase, while a higher cooling rate restricts it, eventually preserving the duplex structure in the SDSS alloy. Results from the phase-field simulations are also compared quantitatively with the experiments, performed on a commercial 2507 SDSS alloy. It is found that overall, the predicted morphological features of the transformation and the composition profiles show good conformity with the empirical data.
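For readers unfamiliar with the phase-field machinery, the sketch below integrates the simplest relative of the free-energy-based evolution equations used in such models, a 1D Allen-Cahn equation with a double-well potential. It is a generic illustration under assumed parameters, not the multicomponent WBM model coupled to CALPHAD data.

import numpy as np

def allen_cahn_step(phi, dt, dx, mobility, kappa, df):
    # Explicit Euler update of d(phi)/dt = M * (kappa * lap(phi) - df/dphi),
    # with periodic boundaries via np.roll.
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx ** 2
    return phi + dt * mobility * (kappa * lap - df(phi))

x = np.linspace(-1.0, 1.0, 200)
phi = np.exp(-(x / 0.3) ** 2)                   # initial diffuse inclusion
df = lambda p: 2 * p * (1 - p) * (1 - 2 * p)    # double-well f = p^2 (1-p)^2
for _ in range(1000):
    phi = allen_cahn_step(phi, dt=1e-4, dx=0.05, mobility=1.0, kappa=1e-3)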
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
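The following Python sketch shows the skeleton of one hybrid neMD-MC step with momenta reversed at both ends of the driven segment, in the spirit of the symmetric prescription described above. The toy harmonic trajectory and all parameters are hypothetical stand-ins; the actual scheme applies the reversals within a specific non-equilibrium driving protocol.

import numpy as np

def hybrid_nemd_mc_step(x, potential, nemd_trajectory, beta, rng):
    # Draw fresh Maxwell-Boltzmann momenta and reverse them at the start.
    p = -rng.normal(size=x.shape)
    x_new, p_new = nemd_trajectory(x, p)   # driven neMD propagation
    p_new = -p_new                         # reverse again at the end
    # Metropolis test on the total energy change; the sign flips leave the
    # kinetic energy unchanged, their role is to preserve detailed balance.
    dH = potential(x_new) - potential(x) + 0.5 * (p_new @ p_new - p @ p)
    return x_new if rng.random() < np.exp(-beta * dH) else x

def toy_nemd(x, p, dt=0.01, n=10):
    # Toy stand-in for a driven neMD segment: velocity Verlet on V = |x|^2/2.
    for _ in range(n):
        p = p + 0.5 * dt * (-x)
        x = x + dt * p
        p = p + 0.5 * dt * (-x)
    return x, p

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(100):
    x = hybrid_nemd_mc_step(x, lambda q: 0.5 * q @ q, toy_nemd, 1.0, rng)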
Kinetic Monte Carlo (kMC) simulation of carbon co-implant on pre-amorphization process.
Park, Soonyeol; Cho, Bumgoo; Yang, Seungsu; Won, Taeyoung
2010-05-01
We report our kinetic Monte Carlo (kMC) study of the effect of a carbon co-implant on the pre-amorphization implant (PAI) process. We employed the BCA (Binary Collision Approximation) approach to obtain the initial as-implanted dopant profile and the kMC method to simulate diffusion during the annealing process. The simulation results imply that the carbon co-implant suppresses boron diffusion through recombination with interstitials. We also compared boron diffusion with carbon diffusion by calculating the carbon-interstitial reactions, and found that boron diffusion is affected by the carbon co-implant energy, which enhances the trapping of the interstitials that would otherwise pair with boron.
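The event-selection core of a kMC diffusion solver of this kind can be illustrated with a generic rejection-free (BKL/Gillespie-type) step: pick an event with probability proportional to its rate and advance the clock by an exponential waiting time. The rates below are hypothetical placeholders, not the actual hop and recombination rates used in the study.

import numpy as np

def kmc_step(rates, rng):
    # One rejection-free kMC step over a fixed event catalog.
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = -np.log(rng.random()) / total       # exponential waiting time
    return event, dt

rng = np.random.default_rng(0)
rates = np.array([1e3, 5e2, 1e1])            # hypothetical event rates (1/s)
t = 0.0
for _ in range(5):
    event, dt = kmc_step(rates, rng)
    t += dt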
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the Microsoft® cloud platform), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU time on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
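The Amdahl's-law behavior reported above is easy to reproduce: with a serial fraction of roughly 1%, 64 instances give close to the observed 37× speedup. The sketch below is illustrative only; the 0.011 serial fraction is a hypothetical fit, not a figure from the paper.

def amdahl_speedup(n_instances, serial_fraction):
    # Amdahl's law: speedup with a non-parallelizable fraction of the work.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_instances)

for n in (1, 8, 64):
    print(n, round(amdahl_speedup(n, 0.011), 1))   # 64 -> ~37.8x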
Hagos, Samson M.; Zhang, Chidong; Feng, Zhe; ...
2016-09-19
Influences of the diurnal cycle of convection on the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC) are examined using cloud-permitting regional model simulations and observations. A pair of ensembles of control (CONTROL) and no-diurnal-cycle (NODC) simulations of the November 2011 MJO episode is performed. In the CONTROL simulations, the MJO signal is weakened as it propagates across the MC, with much of the convection stalling over the large islands of Sumatra and Borneo. In the NODC simulations, where the incoming shortwave radiation at the top of the atmosphere is maintained at its daily mean value, the MJO signal propagating across the MC is enhanced. Examination of the surface energy fluxes in the simulations indicates that in the presence of the diurnal cycle, surface downwelling shortwave radiation in the CONTROL simulations is larger because clouds preferentially form in the afternoon. Furthermore, the diurnal co-variability of surface wind speed and skin temperature results in a larger sensible heat flux and a cooler land surface in CONTROL compared to NODC simulations. An analysis of observations indicates that the modulation of the downwelling shortwave radiation at the surface by the diurnal cycle of cloudiness projects negatively on the MJO intraseasonal cycle and therefore disrupts the propagation of the MJO across the MC.
NASA Astrophysics Data System (ADS)
Hopperstad, O. S.; Børvik, T.; Berstad, T.; Lademo, O.-G.; Benallal, A.
2007-10-01
The constitutive relation proposed by McCormick (1988 Acta Metall. 36 3061-7) for materials exhibiting negative steady-state strain-rate sensitivity and the Portevin-Le Chatelier (PLC) effect is incorporated into an elastic-viscoplastic model for metals with plastic anisotropy. The constitutive model is implemented in LS-DYNA for corotational shell elements. Plastic anisotropy is taken into account by use of the yield criterion Yld2000/Yld2003 proposed by Barlat et al (2003 J. Plast. 19 1297-319) and Aretz (2004 Modelling Simul. Mater. Sci. Eng. 12 491-509). The parameters of the constitutive equations are determined for a rolled aluminium alloy (AA5083-H116) exhibiting negative steady-state strain-rate sensitivity and serrated yielding. The parameter identification is based on existing experimental data. A numerical investigation is conducted to determine the influence of the PLC effect on the onset of necking in uniaxial and biaxial tension for different overall strain rates. The numerical simulations show that the PLC effect leads to significant reductions in the strain to necking for both uniaxial and biaxial stress states. Increased surface roughness with plastic deformation is predicted for strain rates giving serrated yielding in uniaxial tension. It is likely that this is an important reason for the reduced critical strains. The characteristics of the deformation bands (orientation, width, velocity and strain rate) are also studied.
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs work with several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation and high-resolution clinical plans can be calculated.
The significance of parameter uncertainties for the prediction of offshore pile driving noise.
Lippert, Tristan; von Estorff, Otto
2014-11-01
Due to the construction of offshore wind farms and their potential effect on marine wildlife, the numerical prediction of pile driving noise over long ranges has recently gained importance. In this contribution, a coupled finite element/wavenumber integration model for noise prediction is presented and validated by measurements. The ocean environment, especially the sea bottom, can only be characterized with limited accuracy in terms of input parameters for the numerical model at hand. Therefore, the effect of these parameter uncertainties on the prediction of sound pressure levels (SPLs) in the water column is investigated by a probabilistic approach. In fact, a variation of the bottom material parameters by means of Monte-Carlo simulations shows significant effects on the predicted SPLs. A sensitivity analysis of the model with respect to the individual quantities is performed, as well as a global variation. Based on the latter, the probability distribution of the SPLs at an exemplary receiver position is evaluated and compared to measurements. The aim of this procedure is to develop a model that reliably predicts an interval for the SPLs by quantifying the degree of uncertainty of the SPLs with the MC simulations.
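Schematically, the probabilistic approach amounts to propagating sampled bottom parameters through the propagation model and collecting the SPL distribution at a receiver. The Python fragment below illustrates this with a hypothetical linear stand-in for the coupled FE/wavenumber-integration model; the parameter ranges are purely illustrative.

import numpy as np

rng = np.random.default_rng(42)

def spl_model(sound_speed, density, attenuation):
    # Hypothetical stand-in mapping bottom parameters to SPL (dB)
    # at a single receiver; the real model is a coupled FE/wavenumber
    # integration solver.
    return 180.0 - 0.01 * sound_speed + 5.0 * density - 2.0 * attenuation

samples = [spl_model(rng.normal(1700, 50),    # sound speed (m/s)
                     rng.normal(1.9, 0.1),    # density (g/cm^3)
                     rng.normal(0.5, 0.1))    # attenuation (dB/wavelength)
           for _ in range(10000)]
print(np.percentile(samples, [5, 50, 95]))    # SPL prediction interval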
The Man computer Interactive Data Access System: 25 Years of Interactive Processing.
NASA Astrophysics Data System (ADS)
Lazzara, Matthew A.; Benson, John M.; Fox, Robert J.; Laitsch, Denise J.; Rueden, Joseph P.; Santek, David A.; Wade, Delores M.; Whittaker, Thomas M.; Young, J. T.
1999-02-01
12 October 1998 marked the 25th anniversary of the Man computer Interactive Data Access System (McIDAS). On that date in 1973, McIDAS was first used operationally by scientists as a tool for data analysis. Over the last 25 years, McIDAS has undergone numerous architectural changes in an effort to keep pace with changing technology. In its early years, significant technological breakthroughs were required to achieve the functionality needed by atmospheric scientists. Today McIDAS is challenged by new Internet-based approaches to data access and data display. The history and impact of McIDAS, along with some of the lessons learned, are presented here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Tian, Z
Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems. Low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone-beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sample an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered-photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ~2% statistical uncertainty. Hence, a speed-up factor of ~45.3 was achieved by this new path-by-path simulation scheme, in which all the computations are spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
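The accept/reject logic at the heart of such a path-by-path scheme is the standard Metropolis-Hastings recipe applied to whole photon paths. The sketch below assumes a symmetric proposal for simplicity (the real acceptance probability would also account for asymmetric proposal densities and the full transport physics); the toy path space is hypothetical.

import numpy as np

def metropolis_hastings_paths(path_prob, propose, path0, n_steps, rng):
    # Generic Metropolis-Hastings over whole paths: accept a proposed
    # path with probability min(1, p(new)/p(old)), assuming a symmetric
    # proposal, so the chain samples paths with the correct relative
    # probabilities encoded in path_prob.
    path = path0
    chain = []
    for _ in range(n_steps):
        cand = propose(path, rng)
        if rng.random() < min(1.0, path_prob(cand) / path_prob(path)):
            path = cand
        chain.append(path)
    return chain

# Toy example: 'paths' are 2-vectors with a Gaussian path probability
rng = np.random.default_rng(0)
prob = lambda path: np.exp(-0.5 * path @ path)
step = lambda path, rng: path + 0.1 * rng.normal(size=path.shape)
chain = metropolis_hastings_paths(prob, step, np.zeros(2), 1000, rng)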
A Coarse Grained Model for Methylcellulose: Spontaneous Ring Formation at Elevated Temperature
NASA Astrophysics Data System (ADS)
Huang, Wenjun; Larson, Ronald
Methylcellulose (MC) is widely used as a food additive and in pharmaceutical applications, where its thermo-reversible gelation behavior plays an important role. To date the gelation mechanism is not well understood and therefore attracts great research interest. In this study, we adopted coarse-grained (CG) molecular dynamics simulations to model MC chains, including homopolymers and random copolymers that model commercial METHOCEL A, in an implicit-water environment, with each MC monomer modeled as a single bead. The simulations are carried out using LAMMPS. We parameterized our CG model using the radial distribution functions from atomistic simulations of short MC oligomers, extrapolating the results to long chains. We used the dissociation free energy to validate our CG model against the atomistic model. The CG model captured the effects of monomer substitution type and temperature seen in the atomistic simulations. We applied this CG model to simulate single chains up to 1000 monomers long and obtained persistence lengths that are close to those determined from experiment. We observed the chain-collapse transition for a random copolymer 600 monomers long at 50 °C. The chain collapsed into a stable ring structure with an outer diameter of around 14 nm, which appears to be a precursor to the fibril structure observed in methylcellulose gels in recent studies by Lodge et al. Our CG model can be extended to other MC derivatives for studying the interaction between these polymers and small molecules, such as hydrophobic drugs.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
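A minimal sketch of the thermodynamic integration (path sampling) estimate: the log marginal likelihood is the integral over the tempering coefficient beta of the expected log-likelihood under the power posterior p(theta|D, beta) proportional to L(D|theta)^beta p(theta). In practice each expectation comes from an MCMC run at that beta; the closed-form curve below is a hypothetical stand-in for those estimates.

import numpy as np

def thermodynamic_integration(expected_loglik, betas):
    # log p(D) = integral_0^1 E_beta[log L(D|theta)] d(beta), evaluated
    # here with the trapezoidal rule over the tempering path.
    v = np.array([expected_loglik(b) for b in betas])
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(betas)))

betas = np.linspace(0.0, 1.0, 21)
print(thermodynamic_integration(lambda b: -50.0 + 30.0 * b, betas))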
NASA Astrophysics Data System (ADS)
Savarin, A.; Chen, S. S.
2017-12-01
The Madden-Julian Oscillation (MJO) is a dominant mode of intraseasonal variability in the tropics. Large-scale convection fueling the MJO is initiated over the tropical Indian Ocean and propagates eastward across the Maritime Continent (MC) and into the western Pacific. Studies have shown a strong diurnal cycle of convection over the islands and coastal seas, with an afternoon precipitation maximum over land and high terrain, and an early morning maximum over water and mountain-valley areas. Observational studies have also shown that nearly 40-50% of MJO events cannot pass through the MC, which is known as the MC barrier effect. As an eastward-propagating MJO convective event passes over the MC, its nature may be altered by the complex interaction with the large islands and topography. In turn, the passage of an MJO event modulates local conditions over the MC. The diurnal cycle of convection over the MC and its modulation by the MJO are not well understood and are poorly represented in global numerical prediction models. This study aims to improve our understanding of how the diurnal cycle of convection and the presence of the islands of the MC affect the eastward propagation of the MJO over the region. We use a coupled atmosphere-ocean model at high resolution (4 km) over the region to model an MJO event that occurred in November-December 2011. We perform three simulations: one with the real islands and topography, one where the islands retain their shape but the topography is flattened, and one where all the islands are replaced by water. The differences in precipitation organization and structure can help us understand how topography and the presence of islands affect the diurnal cycle of convection and the eastward propagation of the MJO. We hypothesize that removing the islands will result in smoother MJO propagation due to a less strongly forced diurnal cycle of convection and the lack of land, while flattening the terrain will alter the diurnal cycle of convection and the location of precipitation maxima.
Concepts for dose determination in flat-detector CT
NASA Astrophysics Data System (ADS)
Kyriakou, Yiannis; Deak, Paul; Langner, Oliver; Kalender, Willi A.
2008-07-01
Flat-detector computed tomography (FD-CT) scanners provide large irradiation fields of typically 200 mm in the cranio-caudal direction. In consequence, dose assessment according to the current definition of the computed tomography dose index CTDI_L with L = 100 mm, where L is the integration length, would demand larger ionization chambers and phantoms, which do not appear practical. We investigated the usefulness of the CTDI concept and practical dosimetry approaches for FD-CT by measurements and Monte Carlo (MC) simulations. An MC simulation tool (ImpactMC, VAMP GmbH, Erlangen, Germany) was used to assess the dose characteristics and was calibrated with measurements of air kerma. For validation purposes measurements were performed on an Axiom Artis C-arm system (Siemens Medical Solutions, Forchheim, Germany) equipped with a flat detector of 40 cm × 30 cm. The dose was assessed for 70 kV and 125 kV in cylindrical PMMA phantoms of 160 mm and 320 mm diameter with phantom lengths varying from 150 to 900 mm. MC simulation results were compared to the values obtained with calibrated ionization chambers of 100 mm and 250 mm length and to thermoluminescence (TLD) dose profiles. The MC simulations were used to calculate the efficiency of the CTDI_L determination with respect to the desired CTDI∞. Both the MC simulation results and the dose distributions obtained by MC simulation were in very good agreement with the CTDI measurements and with the reference TLD profiles, respectively, to within 5%. Standard CTDI phantoms, which have a z-extent of 150 mm, underestimate the dose at the center by up to 55%, whereas a z-extent of ≥600 mm appears to be sufficient for FD-CT; the baseline value of the respective profile was within 1% of the reference baseline. As expected, the measurements with ionization chambers of 100 mm and 250 mm offer limited accuracy, whereas an increased integration length of ≥600 mm appeared to be necessary to approximate CTDI∞ to within 1%. MC simulations appear to offer a practical and accurate way of assessing conversion factors for arbitrary dosimetry setups using a standard pencil chamber to provide estimates of CTDI∞. This would eliminate the need for extra-long phantoms and ionization chambers or excessive amounts of TLDs.
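The quantity at stake is CTDI_L = (1/nT) times the integral of the single-rotation dose profile D(z) over the integration length L, with CTDI∞ the limit of full scatter-tail coverage. The sketch below integrates a hypothetical profile (Gaussian core plus exponential scatter tails) to show why L = 100 mm undershoots while L ≥ 600 mm is nearly converged; none of the numbers are from the paper.

import numpy as np

def ctdi(z, dose, beam_width):
    # CTDI_L = (1 / nT) * integral of D(z) over the range covered by z.
    dz = z[1] - z[0]
    return dose.sum() * dz / beam_width

z = np.arange(-450.0, 450.0, 1.0)    # mm
dose = np.exp(-0.5 * (z / 80.0) ** 2) + 0.1 * np.exp(-np.abs(z) / 150.0)
for L in (100.0, 250.0, 600.0, 900.0):
    m = np.abs(z) <= L / 2
    print(int(L), round(ctdi(z[m], dose[m], 200.0), 4))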
Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly with the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
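Of the Markov-chain diagnostics named above, the Gelman-Rubin statistic is the most compact to state: it compares between-chain and within-chain variance across m parallel chains and approaches 1 on convergence. The sketch below follows the standard textbook formula, not the KENO/SCALE implementation.

import numpy as np

def gelman_rubin(chains):
    # Potential scale reduction factor R-hat for an (m, n) array of m
    # chains of length n: compares between-chain variance B with the
    # mean within-chain variance W.
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # ~1 when converged

rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 1000))           # 4 well-mixed chains
print(gelman_rubin(chains))                   # close to 1.0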
Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation.
Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong
2018-02-01
Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and spatial resolution for optical imaging simulation. The lens implemented in GATE was validated against ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could improve the image quality of bioluminescence and fluorescence imaging significantly as compared with pinhole optics.
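For orientation, the first-order optics of such a biconvex lens follow the thick-lens lensmaker's equation; the refractive index, radii and thickness below are illustrative values, not those of the implemented GATE lens.

def thick_lens_focal_length(n, r1, r2, d):
    # Lensmaker's equation for a thick lens in air:
    # 1/f = (n - 1) * (1/R1 - 1/R2 + (n - 1) * d / (n * R1 * R2))
    inv_f = (n - 1.0) * (1.0 / r1 - 1.0 / r2 + (n - 1.0) * d / (n * r1 * r2))
    return 1.0 / inv_f

# Symmetric biconvex lens: R1 > 0, R2 < 0 under the usual sign convention
print(thick_lens_focal_length(n=1.5, r1=50.0, r2=-50.0, d=5.0))  # ~50.8 mm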
The Monte Carlo simulation of the Borexino detector
NASA Astrophysics Data System (ADS)
Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2018-01-01
We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.
Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.
Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A
2016-06-01
The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
NASA Astrophysics Data System (ADS)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.
Lajoie, Guillaume; Krouchev, Nedialko I; Kalaska, John F; Fairhall, Adrienne L; Fetz, Eberhard E
2017-02-01
Experiments show that spike-triggered stimulation performed with Bidirectional Brain-Computer-Interfaces (BBCI) can artificially strengthen connections between separate neural sites in motor cortex (MC). When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites eventually strengthen. It was also found that effective spike-stimulus delays are consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules, suggesting that STDP is key to drive these changes. However, the impact of STDP at the level of circuits, and the mechanisms governing its modification with neural implants remain poorly understood. The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both neural and synaptic activity statistics relevant to BBCI conditioning protocols. Our model successfully reproduces key experimental results, both established and new, and offers mechanistic insights into spike-triggered conditioning. Using analytical calculations and numerical simulations, we derive optimal operational regimes for BBCIs, and formulate predictions concerning the efficacy of spike-triggered conditioning in different regimes of cortical activity.
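The STDP rule invoked here is commonly written as an exponential window in the spike-timing difference. The sketch below implements that generic textbook form with hypothetical constants; it is not the specific rule fitted in the modeling study.

import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Exponential STDP: potentiate when the presynaptic spike precedes
    # the postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise.
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Spike-stimulus delays (ms): conditioning is most effective where dw > 0
delays = np.array([-40.0, -10.0, 5.0, 10.0, 40.0])
print(stdp_dw(delays))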
Motion of particles with inertia in a compressible free shear layer
NASA Technical Reports Server (NTRS)
Samimy, M.; Lele, S. K.
1991-01-01
The effects of the inertia of a particle on its flow-tracking accuracy and on particle dispersion are studied using direct numerical simulations of 2D compressible free shear layers in the convective Mach number (Mc) range of 0.2 to 0.6. The results show that particle response is well characterized by tau, the ratio of the particle response time to the flow time scale (the Stokes number). The slip between particle and fluid imposes a fundamental limit on the accuracy of optical measurements such as LDV and PIV. The error is found to grow like tau up to tau = 1 and to taper off at higher tau. For tau = 0.2 the error is about 2 percent. In flow visualizations based on Mie scattering, particles with tau greater than 0.05 are found to grossly misrepresent the flow features. These errors are quantified by calculating the dispersion of particles relative to the fluid. Overall, compressibility does not appear to have a significant effect on the motion of particles in the range of Mc considered here.
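The tau-dependence of the tracking error can be illustrated with a toy first-order drag model, dv/dt = (u - v)/tau; this is a minimal sketch with an assumed sinusoidal model flow, not the shear-layer simulation of the paper:

```python
import numpy as np

def peak_slip(tau, t_end=20.0, dt=1e-3):
    """Integrate dv/dt = (u - v)/tau for a particle in a sinusoidal model flow
    u(t) = sin(t) and return the peak slip |u - v|."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    u = np.sin(t)
    v = np.zeros(n)
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * (u[i] - v[i]) / tau
    return np.abs(u - v).max()

for tau in (0.05, 0.2, 1.0, 5.0):
    print(tau, peak_slip(tau))   # slip grows roughly like tau for small tau, then saturates
```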
Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul
2014-01-01
As part of an ongoing effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin, analyses and simulations of the hydrology of the Edisto River Basin were made using the topography-based hydrological model (TOPMODEL). A primary focus of the investigation was to assess the potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River Basin. Scaling up was done in a stepwise manner, beginning with applying the calibration parameters, meteorological data, and topographic-wetness-index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made for subsequent simulations, culminating in the best simulation, which included meteorological and topographic-wetness-index data from the Edisto River Basin and updated values for some of the TOPMODEL calibration parameters. The scaling-up process resulted in nine simulations. Simulation 7 best matched the streamflows at station 02175000, Edisto River near Givhans, SC, which was the downstream limit for the TOPMODEL setup; it was obtained by adjusting the scaling factor, including streamflow routing, and using NEXRAD precipitation data for the Edisto River Basin. The Nash-Sutcliffe coefficient of model-fit efficiency and Pearson's correlation coefficient for simulation 7 were 0.78 and 0.89, respectively. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the McTier Creek and Edisto River models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the substantial difference in drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek Basin were used in the water-quality load models.
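The two goodness-of-fit statistics quoted above are straightforward to compute from paired observed and simulated daily streamflows; a minimal sketch (function names assumed):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is no
    better than predicting the observed mean."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    """Pearson's correlation coefficient between observed and simulated flows."""
    return np.corrcoef(obs, sim)[0, 1]
```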
McStas 1.7 - a new version of the flexible Monte Carlo neutron scattering package
NASA Astrophysics Data System (ADS)
Willendrup, Peter; Farhi, Emmanuel; Lefmann, Kim
2004-07-01
Current neutron instrumentation is both complex and expensive, and accurate simulation has become essential both for building new instruments and for using them effectively. The McStas neutron ray-trace simulation package is a versatile tool for producing such simulations, developed in collaboration between Risø and ILL. The new version (1.7) has many improvements, among these added support for the popular Microsoft Windows platform. This presentation will demonstrate a selection of the new features through a simulation of the ILL IN6 beamline.
NASA Astrophysics Data System (ADS)
Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés
2014-12-01
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high-performance computing, and we implement it here to perform ultra-fast MC calculations in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. The simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high-performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
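The quoted 47× speed-up is consistent with near-ideal 1/N scaling plus a small fixed startup/aggregation overhead; a back-of-the-envelope sketch, where the ~1.75 min overhead is an assumption fitted to the numbers above, not a value reported in the abstract:

```python
def cloud_runtime_min(t_serial_h, n_nodes, overhead_min=1.75):
    """Estimated wall time in minutes: compute time scaling as 1/n_nodes plus a
    fixed startup/aggregation overhead (assumed, fitted value)."""
    return t_serial_h * 60.0 / n_nodes + overhead_min

t = cloud_runtime_min(2.58, 100)             # ~3.3 min
print(round(t, 1), round(2.58 * 60.0 / t))   # speed-up ~47x, matching the abstract
```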
Assessment and intercomparison of numerical simulations in the Western Mediterranean Sea
NASA Astrophysics Data System (ADS)
Juza, Mélanie; Mourre, Baptiste; Renault, Lionel; Tintoré, Joaquin
2014-05-01
The Balearic Islands Coastal Observing and Forecasting System (SOCIB, www.socib.es) is developing high-resolution numerical simulations (hindcasts and forecasts) in the Western Mediterranean Sea (WMOP). WMOP uses a regional configuration of the Regional Ocean Modelling System (ROMS, Shchepetkin and McWilliams, 2005) with a high spatial resolution of 1/50° (1.5-2 km). These simulations are thus able to reproduce mesoscale and, in some cases, sub-mesoscale features that are key in the Mediterranean Sea, since they interact with and modify the basin and sub-basin circulation. The simulations are initialized from and nested in either the Mediterranean Forecasting System (MFS, 1/16°) or Mercator-Océan simulations (MERCATOR, 1/12°). A repeated glider section in the Ibiza Channel, operated by SOCIB, has revealed significant differences between two WMOP simulations using either MFS or MERCATOR (hereafter WMOP-MFS and WMOP-MERC). In this study, MFS, MERCATOR, WMOP-MFS and WMOP-MERC are compared and evaluated using available multi-platform observations such as satellite products (Sea Level Anomaly, Sea Surface Temperature) and in situ measurements (temperature and salinity profiles from Argo floats, CTD, XBT, fixed moorings and gliders; velocity fields from HF radar and current meters). A quantitative comparison is necessary to evaluate the capacity of the simulations to reproduce observed ocean features and to quantify possible simulation biases. This will in turn allow us to improve the simulations, so as to produce better ocean forecast systems, to study and better understand ocean processes, and to address climate studies. Therefore, various statistical diagnostics have been developed to assess and intercompare the simulations at various spatial and temporal scales, in different sub-regions (Alboran Sea, Western and Eastern Algerian sub-basins, Balearic Sea, Gulf of Lion), in different dynamical zones (coastal areas, shelves and "open" sea), along key sections (Ibiza and Mallorca Channels, Corsica Channel, ...) and during specific events.
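Typical diagnostics of this kind (bias, RMSE and correlation between collocated simulated and observed values) can be sketched as follows; the function and variable names are illustrative, not from the WMOP toolchain:

```python
import numpy as np

def eval_stats(obs, model):
    """Bias, RMSE and linear correlation between collocated observed and
    simulated values (1D arrays of matched samples)."""
    obs = np.asarray(obs, float)
    model = np.asarray(model, float)
    return {"bias": (model - obs).mean(),
            "rmse": np.sqrt(((model - obs) ** 2).mean()),
            "corr": np.corrcoef(obs, model)[0, 1]}
```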
Long-wavelength Instability in Surface-tension-driven Bénard Convection
NASA Astrophysics Data System (ADS)
van Hook, Stephen J.
1997-03-01
Laboratory experiments and numerical simulations reveal that a liquid layer heated from below and possessing a free upper surface can undergo a long-wavelength deformational instability that causes rupture of the interface [S. J. VanHook, M. F. Schatz, W. D. McCormick, J. B. Swift, and H. L. Swinney, Phys. Rev. Lett. 75, 4397 (1995)]. Depending on the depth and thermal conductivity of the liquid and the overlying gas layer, the interface can rupture downwards and form a dry spot or rupture upwards and form a high spot. This long-wavelength instability competes with the formation of Bénard hexagons for thin or viscous liquid layers, or for liquid layers in microgravity.
STS 51-L crewmembers during training session in flight deck simulation
NASA Technical Reports Server (NTRS)
1985-01-01
Shuttle mission simulator (SMS) scene of Astronauts Michael J. Smith, Ellison S. Onizuka, Judith A. Resnik, and Francis R. (Dick) Scobee in their launch and entry positions on the flight deck (46207); Left to right, Backup payload specialist Barbara R. Morgan, Teacher in Space Payload specialist Christa McAuliffe, Hughes Payload specialist Gregory B. Jarvis, and Mission Specialist Ronald E. McNair in the middeck portion of the Shuttle Mission Simulator at JSC (46208).
A Detailed FLUKA-2005 Monte Carlo Simulation for the ATIC Detector
NASA Technical Reports Server (NTRS)
Gunasingha, R. M.; Fazely, A. R.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T. G.
2006-01-01
We have performed a detailed Monte Carlo (MC) calculation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2005, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active Bismuth Germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification and, as a particle tracking system, three projective layers of x-y scintillator hodoscopes, placed above, in the middle of, and below a 0.75-nuclear-interaction-length graphite target. Our calculations are part of an analysis package addressing both the A-dependence and the energy dependence of different nuclei interacting with the ATIC detector. The MC simulates the responses of different components of the detector, such as the Si matrix, the scintillator hodoscopes and the BGO calorimeter, to various nuclei. We also show comparisons of the FLUKA-2005 MC calculations with a GEANT calculation and with data for protons, He and CNO.
Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity
NASA Astrophysics Data System (ADS)
Esler, J. G.
2017-01-01
The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N = 6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. Further, in the N → ∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant-energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N = 50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.
NASA Astrophysics Data System (ADS)
Higgins, N.; Lapusta, N.
2014-12-01
Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that a rate-and-state model can qualitatively reproduce the experimental observations. Models with a combination of higher normal stress and lower L at the patches come closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have lower L). In this heterogeneous rate-and-state fault model, the foreshocks interact with each other and with the overall nucleation process through their postseismic slip. The interplay amongst foreshocks, and between foreshocks and the larger-scale nucleation process, is a topic of our future work.
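For reference, the rate-and-state framework referred to here combines a velocity- and state-dependent friction coefficient with a state-evolution law; a minimal sketch using the Dieterich aging law, with all parameter values as illustrative assumptions rather than those of the study:

```python
import numpy as np

def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, L=1e-6):
    """Rate-and-state friction coefficient: mu = mu0 + a ln(v/v0) + b ln(v0 theta / L).
    All parameter values are assumed for illustration."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / L)

def evolve_state(theta, v, L=1e-6, dt=1e-3):
    """One forward-Euler step of the aging law: d(theta)/dt = 1 - v*theta/L."""
    return theta + dt * (1.0 - v * theta / L)
```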
NASA Astrophysics Data System (ADS)
Ustinov, E. A.
2017-01-01
The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
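The conventional route mentioned above, the Widom test-particle method, estimates the excess chemical potential from the probability that a randomly inserted sphere creates no overlap; a minimal sketch for a periodic hard-sphere configuration (cubic box assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def widom_mu_ex(positions, box, sigma=1.0, n_trials=10_000):
    """Excess chemical potential (in units of kT) of a hard-sphere fluid from
    Widom insertion: mu_ex = -ln P(random insertion creates no overlap)."""
    accepted = 0
    for _ in range(n_trials):
        trial = rng.random(3) * box
        d = positions - trial
        d -= box * np.round(d / box)                 # minimum-image convention
        if np.all(np.einsum('ij,ij->i', d, d) >= sigma ** 2):
            accepted += 1
    return -np.log(accepted / n_trials) if accepted else np.inf

# Demo on a dilute random configuration (not an equilibrated HS fluid)
box = 10.0
positions = rng.random((100, 3)) * box
print(widom_mu_ex(positions, box))
```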
Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach
NASA Astrophysics Data System (ADS)
Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne
We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model of melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies may allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
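A much-simplified stand-in for the dynamic allocation described above is to reweight each move type by its measured past efficiency; here efficiency is reduced to the acceptance fraction, whereas the paper uses richer phase-space sampling criteria and an evolutionary algorithm across parallel runs:

```python
import random

class MoveScheduler:
    """Dynamically reallocate MC move frequencies from their measured past
    efficiency (here simply the acceptance fraction; an assumption, the
    paper's criteria are richer)."""

    def __init__(self, moves):
        self.moves = list(moves)
        self.weights = {m: 1.0 for m in self.moves}
        self.tried = {m: 0 for m in self.moves}
        self.accepted = {m: 0 for m in self.moves}

    def pick(self):
        w = [self.weights[m] for m in self.moves]
        return random.choices(self.moves, w)[0]

    def record(self, move, was_accepted):
        self.tried[move] += 1
        self.accepted[move] += int(was_accepted)

    def reallocate(self, floor=0.05):
        for m in self.moves:
            eff = self.accepted[m] / max(self.tried[m], 1)
            self.weights[m] = max(eff, floor)   # keep every move selectable
```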
NASA Astrophysics Data System (ADS)
Drukker, Karen; Hammes-Schiffer, Sharon
1997-07-01
This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.
A Model to Simulate the Radiative Transfer of Fluorescence in a Leaf
NASA Astrophysics Data System (ADS)
Zhao, F.; Ni, Q.
2018-04-01
Light is reflected, transmitted and absorbed by green leaves. Chlorophyll fluorescence (ChlF) is the signal emitted by chlorophyll molecules in the leaf after the absorption of light. ChlF can be used as a direct probe of the functional status of the photosynthetic machinery because of its close relationship with photosynthesis. The scattering, absorbing, and emitting properties of leaves are spectrally dependent and can be simulated by modeling leaf-level fluorescence. In this paper, we propose a Monte Carlo (MC) model to simulate the radiative transfer of photons in the leaf. Results show that typical leaf fluorescence spectra can be properly simulated, with two peaks centered at around 685 nm in the red and 740 nm in the far-red regions. By analysing the sensitivity of the model to its input parameters, we found that the MC model captures their influence on the emitted fluorescence well. We also compared results simulated by the MC model with those of the Fluspect model; in general they agree well in the far-red region but deviate in the red region.
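The core of such an MC model is a photon random walk with absorption and scattering events; the following is a heavily simplified 1D sketch (scattering collapsed to up/down directions, and all optical coefficients are assumed values, not those of a real leaf):

```python
import numpy as np

rng = np.random.default_rng(1)

def slab_photon_mc(n_photons=20_000, mu_a=0.5, mu_s=5.0, thickness=0.03):
    """1D photon random walk in a slab: path lengths follow Beer-Lambert
    attenuation; at each interaction the photon is absorbed with probability
    mu_a/(mu_a+mu_s), otherwise it scatters up or down with equal probability.
    Returns the absorbed, reflected and transmitted fractions."""
    mu_t = mu_a + mu_s
    absorbed = reflected = transmitted = 0
    for _ in range(n_photons):
        z, direction = 0.0, 1.0          # enter at the top surface, heading down
        while True:
            z += direction * rng.exponential(1.0 / mu_t)
            if z < 0.0:
                reflected += 1
                break
            if z > thickness:
                transmitted += 1
                break
            if rng.random() < mu_a / mu_t:
                absorbed += 1
                break
            direction = 1.0 if rng.random() < 0.5 else -1.0
    return (absorbed / n_photons, reflected / n_photons, transmitted / n_photons)

print(slab_photon_mc())
```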
"First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).
Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert
2012-03-15
First-principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO2(110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during the steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of the TOF in the form of Arrhenius plots.
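At its core, a rejection-free kMC simulation of this kind repeatedly selects one elementary step with probability proportional to its rate and advances time by an exponentially distributed increment; a minimal sketch, where the rate catalogue is hypothetical and not derived from the paper's DFT activation energies:

```python
import math
import random

def kmc_step(rates, t):
    """One rejection-free kinetic MC step: pick an elementary event with
    probability proportional to its rate, then advance the clock by an
    exponentially distributed waiting time."""
    total = sum(rates.values())
    r = random.random() * total
    acc = 0.0
    chosen = None
    for event, rate in rates.items():
        acc += rate
        if r <= acc:
            chosen = event
            break
    if chosen is None:                 # guard against floating-point edge cases
        chosen = event
    dt = -math.log(1.0 - random.random()) / total
    return chosen, t + dt

# Hypothetical rate catalogue (s^-1), for illustration only
rates = {"CO_adsorption": 5e3, "O2_dissociation": 2e3,
         "CO_oxidation": 8e2, "CO_desorption": 1e2}
event, t = kmc_step(rates, 0.0)
```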
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Stayman, J
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to "oracle" constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or on simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
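The kernel-smoothing step can be approximated by a separable Gaussian filter applied within each projection and across gantry angles; a minimal sketch using scipy, where the kernel widths are assumptions rather than the optimized values of the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_scatter(scatter_stack, sigma_angle=2.0, sigma_uv=8.0):
    """Gaussian kernel smoothing of a noisy MC scatter estimate, applied
    across gantry angles (axis 0) and within each projection (axes 1 and 2).
    Kernel widths (in views and pixels) are assumed values."""
    stack = np.asarray(scatter_stack, float)
    return gaussian_filter(stack, sigma=(sigma_angle, sigma_uv, sigma_uv))
```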
2008-01-01
Protein-protein transient and dynamic interactions underlie all biological processes. The molecular dynamics (MD) of the E9 colicin DNase protein, its Im9 inhibitor protein, and their E9-Im9 recognition complex are investigated by combining multiple-copy (MC) MD and accelerated MD (aMD) explicit-solvent simulation approaches, after validation with crystalline-phase and solution experiments. Im9 shows higher flexibility than its E9 counterpart. Im9 displays a significant reduction of backbone flexibility and a remarkable increase in motional correlation upon E9 association. Im9 loops 23-31 and 54-64 open with respect to the E9-Im9 X-ray structure and show high conformational diversity. Upon association, a large fraction (~20 nm²) of the E9 and Im9 protein surfaces becomes inaccessible to water. Numerous salt bridges transiently occurring throughout our six 50 ns long MC-MD simulations are not present in the X-ray model. Among these, Im9 Glu31-E9 Arg96 and Im9 Glu41-Lys89 involve interface interactions. Through the use of 10 ns of Im9 aMD simulation, we reconcile the largest thermodynamic impact measured for the Asp51Ala mutation with Im9 structure and dynamics. Lys57 acts as an essential molecular switch that shifts the Im9 surface loop towards an ideal configuration for E9 inhibition. This is achieved by switching the Asp60-Lys57 and Asp62-Lys57 hydrogen bonds to the Asp51-Lys57 salt bridge. E9-Im9 recognition involves shifts of conformational distributions, reorganization of intramolecular hydrogen-bond patterns, and formation of new inter- and intramolecular interactions. The description of key transient biological interactions can be significantly enriched by the dynamic and atomic-level information provided by computer simulations. PMID:19053689
Contrast of Backscattered Electron SEM Images of Nanoparticles on Substrates with Complex Structure
Kowoll, Thomas; Müller, Erich; Fritsch-Decker, Susanne; Hettler, Simon; Störmer, Heike; Weiss, Carsten; Gerthsen, Dagmar
2017-01-01
This study is concerned with backscattered electron scanning electron microscopy (BSE SEM) contrast of complex nanoscaled samples which consist of SiO2 nanoparticles (NPs) deposited on indium-tin-oxide covered bulk SiO2 and glassy carbon substrates. BSE SEM contrast of NPs is studied as a function of the primary electron energy and working distance. Contrast inversions are observed which prevent intuitive interpretation of NP contrast in terms of material contrast. Experimental data are quantitatively compared with Monte Carlo (MC) simulations. Quantitative agreement between experimental data and MC simulations is obtained if the transmission characteristics of the annular semiconductor detector are taken into account. MC simulations facilitate the understanding of NP contrast inversions and are helpful for deriving conditions for optimum material and topography contrast. PMID:29109816
NASA Astrophysics Data System (ADS)
Matsui, T.; Dolan, B.; Tao, W. K.; Rutledge, S. A.; Iguchi, T.; Barnum, J. I.; Lang, S. E.
2017-12-01
This study presents polarimetric radar characteristics of intense convective cores derived from observations as well as from a polarimetric-radar simulator applied to cloud-resolving model (CRM) simulations of the Midlatitude Continental Convective Clouds Experiment (MC3E) May 23 case over Oklahoma and the Tropical Warm Pool-International Cloud Experiment (TWP-ICE) Jan 23 case over Darwin, Australia, to highlight the contrast between continental and maritime convection. The POLArimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a state-of-the-art T-matrix-Mueller-matrix-based polarimetric radar simulator that can generate synthetic polarimetric radar signals (reflectivity, differential reflectivity, specific differential phase, co-polar correlation) as well as synthetic radar retrievals (precipitation, hydrometeor type, updraft velocity) through the consistent treatment of cloud microphysics and dynamics from CRMs. The Weather Research and Forecasting (WRF) model is configured to simulate continental and maritime severe storms over the MC3E and TWP-ICE domains with the Goddard bulk 4ICE single-moment microphysics and the HUCM spectral-bin microphysics. Various statistical diagrams of polarimetric radar signals, hydrometeor types, updraft velocity, and precipitation intensity are investigated for convective and stratiform precipitation regimes and directly compared between the MC3E and TWP-ICE cases. The results show that MC3E convection is characterized by very strong reflectivity (up to 60 dBZ), slightly negative differential reflectivity (-0.8 to 0 dB) and near-zero specific differential phase above the freezing level. On the other hand, TWP-ICE convection shows strong reflectivity (up to 50 dBZ), slightly positive differential reflectivity (0 to 1.0 dB) and differential phase (0 to 0.8 dB/km). The Hydrometeor IDentification (HID) algorithm applied to the observations and simulations detects a hail-dominated convective core in MC3E and a graupel-dominated convective core in TWP-ICE. This land-ocean contrast agrees with previous studies of TRMM satellite radar and radiometer climatologies associated with warm-cloud depths and the vertical structure of buoyancy.
NASA Astrophysics Data System (ADS)
Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-08-01
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
Monte Carlo simulations to replace film dosimetry in IMRT verification.
Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig
2011-01-01
Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil-beam-based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of diode measurements and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and the MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3 mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference), with dose deviations up to 10%. The corresponding value was significantly reduced to 0.34 ± 0.09 for the MC dose calculation. The total time needed for the two verification procedures is comparable; however, the MC-based procedure is far less labor-intensive. The presented study showed that independent dose calculation verification of IMRT plans with a fast MC program has the potential to increasingly supplant film dosimetry in the near future. Thus, the linac-specific QA part will necessarily become more important. In combination with MC simulations, and because of the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended, at least in the IMRT introduction phase.
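The 3%/3 mm gamma criterion referenced above combines a dose-difference test and a distance-to-agreement test; a minimal 1D sketch of a global gamma computation (brute force, illustrative only):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dose_crit=0.03, dist_crit=3.0):
    """Global 1D gamma analysis (default 3%/3 mm): for each reference point,
    minimize the combined dose-difference/distance metric over all evaluated
    points; gamma <= 1 counts as a pass."""
    dose_ref = np.asarray(dose_ref, float)
    dose_eval = np.asarray(dose_eval, float)
    x = np.asarray(x, float)               # positions in mm
    dmax = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i in range(dose_ref.size):
        dd = (dose_eval - dose_ref[i]) / (dose_crit * dmax)
        dx = (x - x[i]) / dist_crit
        gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gamma
```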
NASA Astrophysics Data System (ADS)
Sud, Y. C.; Walker, G. K.
1999-09-01
A prognostic cloud scheme named McRAS (Microphysics of Clouds with Relaxed Arakawa-Schubert Scheme) has been designed and developed with the aim of improving moist processes, the microphysics of clouds, and cloud-radiation interactions in GCMs. McRAS distinguishes three types of clouds: convective, stratiform, and boundary layer. The convective clouds transform and merge into stratiform clouds on an hourly timescale, while the boundary layer clouds merge into the stratiform clouds instantly. The cloud condensate converts into precipitation following the autoconversion equations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically with fully interactive cloud microphysics throughout the life cycle of the cloud, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single-column model (SCM) with the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE) Phase III data has shown that, together with the rest of the model physics, McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. The time history and time-mean in-cloud water and ice distributions, fractional cloudiness, cloud optical thickness, origin of precipitation in the convective anvils and towers, and the convective updraft and downdraft velocities and mass fluxes all exhibit realistic behavior. Some of these diagnostics are not verifiable with the data on hand. The SCM sensitivity tests show that (i) without clouds the simulated GATE-SCM atmosphere is cooler than observed; (ii) the model's convective scheme, RAS, is an important subparameterization of McRAS; and (iii) advection of cloud water substance is helpful in simulating a better cloud distribution and cloud-radiation interaction. An evaluation of the performance of McRAS in the Goddard Earth Observing System II GCM is given in a companion paper (Part II).
Thermo-mechanically coupled subduction with a free surface using ASPECT
NASA Astrophysics Data System (ADS)
Fraters, Menno; Glerum, Anne; Thieulot, Cedric; Spakman, Wim
2014-05-01
ASPECT (Kronbichler et al., 2012), short for Advanced Solver for Problems in Earth's ConvecTion, is a new finite element code which was originally designed for thermally driven (mantle) convection and is built on state-of-the-art numerical methods (adaptive mesh refinement, linear and nonlinear solvers, stabilization of transport-dominated processes, and high scalability on multiple processors). Here we present an application of ASPECT to the modeling of fully thermo-mechanically coupled subduction. Our subduction model contains three different compositions: a crustal composition on top of both the subducting slab and the overriding plate, a mantle composition, and a sticky-air composition, which allows for simulating a free surface for modeling topography build-up. We implemented a visco-plastic rheology using frictional plasticity and a composite viscosity defined by diffusion and dislocation creep. The lithospheric mantle has the same composition as the mantle but has a higher viscosity because of its lower temperature. The temperature field is implemented in ASPECT as follows: a linear temperature gradient for the lithosphere and an adiabatic geotherm for the sublithospheric mantle. The initial slab temperature is defined using the analytical solution of McKenzie (1970). The plates can be pushed from the sides of the model, and it is possible to define an additional independent mantle in/outflow through the boundaries. We will show a preliminary set of models highlighting the code's capabilities, such as adaptive mesh refinement, topography development and the influence of mantle flow on the subduction evolution. Kronbichler, M., Heister, T., and Bangerth, W. (2012), High accuracy mantle convection simulation through modern numerical methods, Geophysical Journal International, 191, 12-29, doi:10.1111/j.1365-246X.2012.05609. McKenzie, D.P. (1970), Temperature and potential temperature beneath island arcs, Tectonophysics, 10, 357-366, doi:10.1016/0040-1951(70)90115-0.
A classical density functional theory for the asymmetric restricted primitive model of ionic liquids
NASA Astrophysics Data System (ADS)
Lu, Hongduo; Nordholm, Sture; Woodward, Clifford E.; Forsman, Jan
2018-05-01
A new three-parameter (valency, ion size, and charge asymmetry) model, the asymmetric restricted primitive model (ARPM) of ionic liquids, has recently been proposed. Given that ionic liquids generally are composed of monovalent species, the ARPM effectively reduces to a two-parameter model. Monte Carlo (MC) simulations have demonstrated that the ARPM is able to reproduce key properties of room temperature ionic liquids (RTILs) in bulk and at charged surfaces. The relatively modest complexity of the model raises the possibility, which is explored here, that a classical density functional theory (DFT) could resolve its properties. This is relevant because it might generate great improvements in terms of both numerical efficiency and understanding in the continued research of RTILs and their applications. In this report, a DFT for rod-like molecules is proposed as an approximate theoretical tool for an ARPM fluid. Borrowing data on the ion pair fraction from a single bulk simulation, the ARPM is modelled as a mixture of dissociated ions and connected ion pairs. We have specifically studied an ARPM where the hard-sphere diameter is 5 Å, with the charge located 1 Å from the hard-sphere centre. We focus on fluid structure and electrochemical behaviour of this ARPM fluid, into which a model electrode is immersed. The latter is modelled as a perfect conductor, and surface polarization is handled by the method of image charges. Approximate methods, which were developed in an earlier study, to take image interactions into account, are also incorporated in the DFT. We make direct numerical comparisons between DFT predictions and corresponding simulation data. The DFT theory is implemented both in the normal mean field form with respect to the electrostatic interactions and in a correlated form based on hole formation by both steric repulsions and ion-ion Coulomb interactions. The results clearly show that ion-ion correlations play a very important role in the screening of the charged surfaces by our ARPM ionic liquid. We have studied electrostatic potentials and ion density profiles as well the differential capacitance. The mean-field DFT fails to reproduce these properties, but the inclusion of ion-ion correlation by a simple approximate treatment yields quite reasonable agreement with the corresponding simulation results. An interesting finding is that there appears to be a surface phase transition at relatively low surface charge which is readily explored by DFT, but seen also in the MC simulations at somewhat higher asymmetry.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as neutron eigenvalue calculations, the time-consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA) and tested on an NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large number of global memory transactions. Possible solutions to improve the code efficiency are discussed.
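The history-based versus event-based distinction can be illustrated in plain numpy: rather than following one particle's full history at a time, an event-based code advances all live particles through the same event type in one batch, which is what keeps SIMD lanes (or GPU warps) convergent. This is a toy sketch with assumed cross-section values, not the eigenvalue algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def event_based_step(x, alive, sigma_t=1.0, absorb_frac=0.3):
    """Advance every live particle through the same event type in one batch:
    first all free flights, then all collision outcomes."""
    n_alive = int(alive.sum())
    # Event 1: free flight, sampled for all live particles at once
    x[alive] += rng.exponential(1.0 / sigma_t, n_alive)
    # Event 2: collision outcome (absorption with an assumed probability)
    absorbed = rng.random(n_alive) < absorb_frac
    idx = np.flatnonzero(alive)
    alive[idx[absorbed]] = False
    return x, alive

x = np.zeros(100_000)
alive = np.ones(x.size, dtype=bool)
while alive.any():
    x, alive = event_based_step(x, alive)
```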
Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur
2012-01-01
One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall and Q1 and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems during their lifetime (life cycle). The large-scale forcing derived from MC3E will be used to drive GCE model simulations. The model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, a regional-scale model with very high resolution, the NASA Unified WRF, was also used for real-time forecasting during the MC3E campaign, to ensure that precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to the initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall using the GCE model and NU-WRF.
Magnus: A New Resistive MHD Code with Heat Flow Terms
NASA Astrophysics Data System (ADS)
Navarro, Anamaría; Lora-Clavijo, F. D.; González, Guillermo A.
2017-07-01
We present a new magnetohydrodynamic (MHD) code for the simulation of wave propagation in the solar atmosphere, under the effects of electrical resistivity (not dominant) and heat transfer, on a uniform 3D grid. The code is based on the finite-volume method combined with the HLLE and HLLC approximate Riemann solvers, which use different slope limiters such as MINMOD, MC, and WENO5. In order to control the growth of the divergence of the magnetic field due to numerical errors, we apply the Flux Constrained Transport method, which is described in detail to show how the resistive terms are included in the algorithm. In our results, it is verified that this method preserves the divergence of the magnetic field to within machine round-off error (~1 × 10^-12). For the validation of the accuracy and efficiency of the schemes implemented in the code, we present some numerical tests in 1D and 2D for ideal MHD. We then show one test for the resistivity in a magnetic reconnection process and one for thermal conduction, where the temperature is advected along the magnetic field lines. Moreover, we display two numerical problems associated with MHD wave propagation. The first corresponds to a 3D evolution of a vertical velocity pulse at the photosphere-transition-corona region, while the second consists of a 2D simulation of a transverse velocity pulse in a coronal loop.
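Of the limiters named above, MINMOD and MC have simple closed forms; a minimal numpy sketch of both, acting on left and right slope estimates a and b:

```python
import numpy as np

def minmod(a, b):
    """MINMOD: the smaller of the two slopes when they agree in sign, else zero."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def mc_limiter(a, b):
    """Monotonized central (MC): min(2|a|, 2|b|, |a + b|/2) with the common
    sign when the slopes agree, else zero."""
    s = np.minimum(np.minimum(2.0 * np.abs(a), 2.0 * np.abs(b)),
                   0.5 * np.abs(a + b))
    return np.where(a * b > 0.0, np.sign(a) * s, 0.0)
```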
Astronaut William S. McArthur in training for contingency EVA in WETF
NASA Technical Reports Server (NTRS)
1993-01-01
Astronaut William S. McArthur, mission specialist, participates in training for contingency extravehicular activity (EVA) for the STS-58 mission. He is wearing the extravehicular mobility unit (EMU) minus his helmet. For simulation purposes, McArthur was about to be submerged to a point of neutral buoyancy in the JSC Weightless Environment Training Facility (WETF).
Poster — Thur Eve — 47: Monte Carlo Simulation of Scp, Sc and Sp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.
The in-water output ratio (Scp), in-air output ratio (Sc), and phantom scatter factor (Sp) are important parameters for radiotherapy dose calculation. Experimentally, Scp is obtained by measuring the dose-rate ratio in a water phantom, and Sc the water kerma-rate ratio in air. There is no method that allows direct measurement of Sp. The Monte Carlo (MC) method has been used to simulate Scp and Sc in the literature, similar to the experimental setup, but to the best of our knowledge no direct MC simulation of Sp is available yet. We propose in this report a method of performing direct MC simulation of Sp. Starting from the definition, we derived that the Sp of a clinical photon beam can be approximated by the ratio of the dose rates contributed by the primary beam for a given field size and for the reference field size. Since only the primary beam is used, any linac head scattering should be excluded from the simulation, which can be realized by using the incident electron as a scoring parameter for MU. We performed MC simulations for Scp, Sc and Sp. Scp matches well with golden beam data. Sp obtained by the proposed method agrees well with that obtained using the traditional method, Sp = Scp/Sc. Since the primary beam dominates more as the field size decreases, our Sp simulation method is accurate for small fields. By analyzing the calculated data, we found that this method can also be used without problems for large fields. The difference it introduces is clinically insignificant.
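Both routes to Sp mentioned above are simple ratios; a trivial sketch contrasting them (variable names assumed):

```python
def sp_traditional(scp, sc):
    """Conventional route: Sp = Scp / Sc from two measurable ratios."""
    return scp / sc

def sp_direct(dose_primary_field, dose_primary_ref):
    """Direct MC route proposed above: ratio of the primary-beam dose rates
    for the field of interest and for the reference field, with linac head
    scatter excluded from the simulation."""
    return dose_primary_field / dose_primary_ref
```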
A numerical study of the effect of urbanization on the climate of Las Vegas metropolitan area
NASA Astrophysics Data System (ADS)
Kamal, S. M.; Huang, H. P.; Myint, S. W.
2014-12-01
Las Vegas is one of the fastest-growing desert cities. Its developed area has doubled in the last 30 years. An accurate prediction of the effect of urbanization on the climate of the city is crucial for resource management and planning. In this study, we use the Weather Research and Forecasting (WRF) model, coupled with a land surface and urban canopy model, to investigate the effects of urbanization on the regional climate pattern around Las Vegas. High-resolution numerical simulations are performed with a 3 km resolution over the metropolitan area. With identical lateral boundary conditions, three land-use/land-cover maps, representing 2006, 1992 and a hypothetical 1900, are used in multiple simulations. The differences in the simulated climate among those cases are used to quantify the urban effect. The simulated surface air temperature is validated against observational data from the weather station at McCarran airport. It is found that urbanization produces substantial warming during the night but a minor cooling during the day. Detailed diagnostics of the surface energy budget are performed to help interpret this result. In addition, the emerging urban structures are found to have a mechanical effect of slowing down the climatological wind field over the urban area. The change in wind, in turn, leads to a secondary modification of the temperature structure within the air shed of the city. This finding suggests the need to combine the mechanical and thermodynamic effects to construct a complete picture of the influence of land cover on urban climate. In all of the simulations, it is also demonstrated that urbanization influences surface air temperature mainly within the metropolitan area.
NASA Astrophysics Data System (ADS)
Dinniman, Michael S.; Klinck, John M.; Smith, Walker O.
2007-11-01
Satellite imagery shows that there was substantial variability in the sea ice extent in the Ross Sea during 2001-2003. Much of this variability is thought to be due to several large icebergs that moved through the area during that period. The effects of these changes in sea ice on circulation and water mass distributions are investigated with a numerical general circulation model. It would be difficult to simulate the highly variable sea ice from 2001 to 2003 with a dynamic sea ice model since much of the variability was due to the floating icebergs. Here, sea ice concentration is specified from satellite observations. To examine the effects of changes in sea ice due to iceberg C-19, simulations were performed using either climatological ice concentrations or the observed ice for that period. The heat balance around the Ross Sea Polynya (RSP) shows that the dominant term in the surface heat budget is the net exchange with the atmosphere, but advection of oceanic warm water is also important. The area average annual basal melt rate beneath the Ross Ice Shelf is reduced by 12% in the observed sea ice simulation. The observed sea ice simulation also creates more High-Salinity Shelf Water. Another simulation was performed with observed sea ice and a fixed iceberg representing B-15A. There is reduced advection of warm surface water during summer from the RSP into McMurdo Sound due to B-15A, but a much stronger reduction is due to the late opening of the RSP in early 2003 because of C-19.
Structure of Cometary Dust Particles
NASA Astrophysics Data System (ADS)
Levasseur-Regourd, A. C.; Hadamcik, E.; Lasue, J.
2004-11-01
The recent encounter of Stardust with comet 81P/Wild 2 has provided highly spatially resolved data about dust particles in the coma. They show intense swarms and bursts of particles, suggest the existence of fragmenting low-density particles formed of higher density sub-micrometer components [1], and definitely confirm previous results (inferred from the Giotto encounter with comet Grigg-Skjellerup [2] and remote light scattering observations [3]). The light scattering properties (mostly polarization, which does not depend upon disputable normalizations) of dust in cometary comae will be summarized, with emphasis on the spatial changes and on the wavelength and phase angle dependence. Experimental and numerical simulations are needed to translate these observed light scattering properties into physical properties of the dust particles (e.g. size, morphology, albedo, porosity). New experimental simulations (with fluffy mixtures of sub-micron sized silica and carbon grains) and new numerical simulations (with fractal aggregates of homogeneous or core-mantled silicate and organic grains) will be presented. The results are in favor of highly porous particles built up (by ballistic cluster-cluster agglomeration) from grains of interstellar origin. The perspectives offered by laboratory simulations with aggregates built under conditions representative of the early solar system on board the International Space Station will be presented, together with the perspectives offered by future experiments on board the Rosetta cometary probe. Support from CNES and ESA is acknowledged. [1] Tuzzolino et al., Science 304, 1776, 2004; [2] N. McBride et al., Mon. Not. R. Astron. Soc. 289, 535-553, 1997; [3] Levasseur-Regourd and Hadamcik, J. Quant. Spectrosc. Radiat. Transfer 79-80, 903-910, 2003.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric, with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35-93 s and 114-122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%-50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
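The fit-and-terminate loop at the heart of CMCF can be sketched in one dimension: band-limit a noisy MC scatter profile with a low-pass filtered FFT, then use Pearson's r as the GOF check. The array size, cutoff frequency, and noise model below are assumptions, not the paper's values:

```python
import numpy as np

def lowpass_fourier_fit(s_mc, keep_freqs=8):
    """Fit a noisy 1D scatter profile to a frequency-limited sum of
    sines/cosines by zeroing all FFT coefficients above keep_freqs."""
    coeffs = np.fft.rfft(s_mc)
    coeffs[keep_freqs:] = 0.0  # band-limit the representation
    return np.fft.irfft(coeffs, n=len(s_mc))

def pearson_gof(s_mc, s_fit):
    """Pearson correlation between the raw MC estimate and its fit."""
    return np.corrcoef(s_mc, s_fit)[0, 1]

# Toy example: smooth "true" scatter plus MC noise.
x = np.linspace(0, 2 * np.pi, 256)
s_true = 1.0 + 0.4 * np.sin(x) + 0.1 * np.cos(3 * x)
s_mc = s_true + 0.05 * np.random.default_rng(0).normal(size=x.size)

s_fit = lowpass_fourier_fit(s_mc)
r = pearson_gof(s_mc, s_fit)
print(f"Pearson r = {r:.4f}")  # terminate MC batches once r exceeds a target
```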
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/Geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/Geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
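The class-II split can be illustrated with a toy transport step in which losses above the user threshold are sampled as discrete hard ionizations while sub-threshold losses are lumped into a continuous restricted stopping-power term. All numerical constants below are placeholders, not MCsquare's physics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy class-II condensed-history step for a charged particle (illustrative only):
# losses above E_CUT are sampled as individual "hard" ionizations, while
# sub-threshold ("soft") losses are grouped into a continuous term.
E_CUT = 0.1          # MeV, user-specified hard-event threshold (assumption)
RESTRICTED_SP = 0.5  # MeV/cm, restricted stopping power, placeholder value
HARD_RATE = 2.0      # hard ionizations per cm, placeholder value

def condensed_history_step(energy_mev, step_cm):
    """Advance one step: continuous soft loss + sampled hard ionizations."""
    energy_mev -= RESTRICTED_SP * step_cm      # grouped soft losses
    n_hard = rng.poisson(HARD_RATE * step_cm)  # discrete hard events
    for _ in range(n_hard):
        # Sample a hard loss from a crude 1/E^2-like spectrum above the cut.
        loss = E_CUT / (1.0 - rng.random() * 0.9)
        energy_mev -= min(loss, energy_mev)
    return max(energy_mev, 0.0)

e = 200.0  # MeV proton, say
for _ in range(50):
    e = condensed_history_step(e, step_cm=0.1)
print(f"Energy after 5 cm: {e:.1f} MeV")
```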
Oxidation of a new Biogenic VOC: Chamber Studies of the Atmospheric Chemistry of Methyl Chavicol
NASA Astrophysics Data System (ADS)
Bloss, William; Alam, Mohammed; Adbul Raheem, Modinah; Rickard, Andrew; Hamilton, Jacqui; Pereira, Kelly; Camredon, Marie; Munoz, Amalia; Vazquez, Monica; Vera, Teresa; Rodenas, Mila
2013-04-01
The oxidation of volatile organic compounds (VOCs) leads to the formation of ozone and secondary organic aerosol (SOA), with consequences for air quality, health, crop yields, atmospheric chemistry, and radiative transfer. Recent observations have identified Methyl Chavicol ("MC"; estragole; 1-allyl-4-methoxybenzene, C10H12O) as a major BVOC above pine forests in the USA and oil palm plantations in Malaysian Borneo. Palm oil cultivation, and hence MC emissions, may be expected to increase with societal food and biofuel demand. We present the results of a series of simulation chamber experiments to assess the atmospheric fate of MC. Experiments were performed in the EUPHORE facility, monitoring stable product species, radical intermediates, and aerosol production and composition. We determine rate constants for the reaction of MC with OH and O3, and ozonolysis radical yields. Stable product measurements (FTIR, PTRMS, GC-SPME) are used to determine the yields of stable products formed from OH- and O3-initiated oxidation, and to develop an understanding of the initial stages of the MC degradation chemistry. A surrogate mechanism approach is used to simulate MC degradation within the MCM, evaluated in terms of the ozone production measured in the chamber experiments, and applied to quantify the role of MC in the real atmosphere.
Comparative Study of Three High Order Schemes for LES of Temporally Evolving Mixing Layers
NASA Technical Reports Server (NTRS)
Yee, Helen M. C.; Sjogreen, Biorn Axel; Hadjadj, C.
2012-01-01
Three high order shock-capturing schemes are compared for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc), ranging from the quasi-incompressible regime to the highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameters agree well with the experimental results of Barone et al. (2006) and with the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with the experimental data and DNS computations.
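For reference, the fifth-order WENO reconstruction that underlies WENO5 (the classic Jiang-Shu smoothness indicators and weights) can be sketched compactly; the value of eps and the test data are illustrative:

```python
import numpy as np

def weno5_left(v):
    """WENO5 reconstruction of the left-biased interface value v_{i+1/2}
    from five cell averages v = (v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}),
    using the classic Jiang-Shu smoothness indicators and weights."""
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate third-order stencils.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Smoothness indicators.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    eps = 1e-6
    # Nonlinear weights from the linear weights (0.1, 0.6, 0.3).
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# Smooth test data: the reconstruction stays close to the interface value.
x = np.linspace(0, 1, 5)
print(weno5_left(np.sin(2*np.pi*x)))
```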
Using McStas for modelling complex optics, using simple building bricks
NASA Astrophysics Data System (ADS)
Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim
2011-04-01
The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow is intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow “components inside components”, or meta-components, allowing the functionality of several simple components to be combined to achieve more complex behaviour, i.e. four single mirror plates together defining a guide. We will here show that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.
A Collection of Nonlinear Aircraft Simulations in MATLAB
NASA Technical Reports Server (NTRS)
Garza, Frederico R.; Morelli, Eugene A.
2003-01-01
Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.
NASA Astrophysics Data System (ADS)
Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.
2015-12-01
Numerical modeling of disposal of nuclear waste in a deep geologic repository in fractured crystalline rock requires robust characterization of fractures. Various methods for fracture representation in granitic rocks exist. In this study we used the fracture continuum model (FCM) to characterize fractured rock for use in the simulation of flow and transport in the far field of a generic nuclear waste repository located at 500 m depth. The FCM approach is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The method generates permeability fields using field observations of fracture sets. The original method, described in McKenna and Reeves (2005), was designed for vertical fractures. The method has since been extended to incorporate fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation (Kalinina et al., 2012, 2014). For this study the numerical code PFLOTRAN (Lichtner et al., 2015) has been used to model flow and transport. PFLOTRAN solves a system of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Benchmark tests were conducted to simulate flow and transport in a specified model domain. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the FCM method was used to generate a permeability field of the fractured rock. The PFLOTRAN code was then used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method to model fractured crystalline rocks. The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains.
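The core FCM operation, stamping stochastically generated discrete fractures onto a regular permeability grid, can be sketched in 2D. The fracture statistics and permeability values below are placeholders, not the calibrated inputs of this study:

```python
import numpy as np

rng = np.random.default_rng(42)

# 2D illustration of the fracture-continuum idea: stamp high-permeability
# cells along stochastically generated fracture traces on a regular grid.
N = 200                          # grid cells per side (assumption)
K_MATRIX, K_FRAC = 1e-18, 1e-12  # m^2, placeholder permeabilities
k = np.full((N, N), K_MATRIX)

for _ in range(60):  # number of fractures: placeholder
    x0, y0 = rng.uniform(0, N, size=2)  # fracture center
    theta = rng.normal(np.pi/3, 0.2)    # set orientation (assumption)
    length = rng.lognormal(mean=3.0, sigma=0.5)
    t = np.linspace(-length/2, length/2, int(2*length) + 2)
    xs = np.clip((x0 + t*np.cos(theta)).astype(int), 0, N-1)
    ys = np.clip((y0 + t*np.sin(theta)).astype(int), 0, N-1)
    k[ys, xs] = K_FRAC  # map the discrete fracture onto the grid

print(f"fractured cells: {(k == K_FRAC).mean():.1%}")
# The resulting field k would then be handed to a flow solver such as PFLOTRAN.
```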
A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.
2005-01-01
This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm achieves the best compensation among all four adaptive predictors, and investigates in detail the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor achieve better compensation of transport delay than the McFarland predictor.
Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems
Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...
2017-11-07
We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
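At its core, the hybrid scheme accepts or rejects each nonequilibrium switch with a Metropolis criterion on the accumulated work. A minimal sketch, omitting the momentum reversals and the pH-dependent free-energy offset of the full semigrand canonical method:

```python
import math
import random

def accept_nemd_switch(work_kcal_mol, temperature_k=300.0):
    """Metropolis acceptance for a nonequilibrium MD/MC switch: the candidate
    protonation state is accepted with probability min(1, exp(-W/kT)),
    where W is the work accumulated during the alchemical switch.
    (Sketch only: the full method adds a pH-dependent term.)"""
    kT = 0.0019872041 * temperature_k  # Boltzmann constant in kcal/(mol K)
    return random.random() < min(1.0, math.exp(-work_kcal_mol / kT))

# Toy usage: suppose the neMD switch of a titratable site returned W = 1.2 kcal/mol.
print(accept_nemd_switch(1.2))
```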
Game of Life on the Equal Degree Random Lattice
NASA Astrophysics Data System (ADS)
Shao, Zhi-Gang; Chen, Tao
2010-12-01
An effective matrix method is used to build the equal degree random (EDR) lattice, and a cellular automaton game of life on the EDR lattice is then studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) gives a live-cell density of ρ=0.37017, which is consistent with the result ρ=0.37±0.003 from MC simulation.
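A degree-8 random-neighbor lattice serves as a simple stand-in for the EDR lattice (true EDR graphs are symmetric, whereas this sketch draws each cell's neighbors independently) and suffices to reproduce the near-mean-field density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Game of Life rules on a degree-8 random-neighbor lattice, a simple proxy
# for the EDR lattice; drawing each cell's 8 neighbors independently is an
# approximation that suppresses spatial correlations, as the EDR lattice does.
N_CELLS, STEPS = 100_000, 200
neighbors = rng.integers(0, N_CELLS, size=(N_CELLS, 8))
state = (rng.random(N_CELLS) < 0.5).astype(np.uint8)

for _ in range(STEPS):
    live_nbrs = state[neighbors].sum(axis=1)
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    state = ((live_nbrs == 3) | ((state == 1) & (live_nbrs == 2))).astype(np.uint8)

print(f"live-cell density: {state.mean():.3f}")  # mean-field predicts ~0.370
```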
Improved two-point model for limiter scrape-off layer
NASA Astrophysics Data System (ADS)
Tokar, M. Z.; Kobayashi, M.; Feng, Y.
2004-10-01
An analytical model for a limiter scrape-off layer (SOL) is proposed, which self-consistently takes into account both conductive and convective contributions to the heat transport in the SOL. The particle flows in the main part of the SOL are determined by considering the recycling of neutrals. The model allows us to interpret the results of numerical simulation by the code EMC3-EIRENE [Y. Feng, F. Sardei, P. Grigull, K. McCormick, J. Kisslinger, D. Reiter, and Y. Igitkhanov, Plasma Phys. Controlled Fusion 44, 611 (2002)] for the edge region of the Tokamak Experiment for Technology Oriented Research (TEXTOR) [Proceedings of the 16th IEEE Symposium on Fusion Engineering, 1995 (Institute for Electrical and Electronics Engineers, Piscataway, NJ, 1995), p. 470].
Yeo, Sang Chul; Lo, Yu Chieh; Li, Ju; Lee, Hyuck Mo
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier Eb depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage, which allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
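Per step, the kMC machinery described above reduces to computing Arrhenius rates from the DFT barriers and selecting the next event with the rejection-free (BKL/Gillespie) algorithm. The barriers, prefactor, and temperature below are illustrative placeholders, not the study's DFT values:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

KB_EV = 8.617333e-5  # Boltzmann constant, eV/K
NU = 1.0e13          # attempt-frequency prefactor in 1/s (common assumption)
T = 700.0            # temperature in K (illustrative)

# Illustrative barriers (eV) for the eight event classes named in the text;
# these are placeholders, not the DFT values from the study.
barriers = {"adsorption": 0.0, "desorption": 1.1, "dissociation": 0.9,
            "reverse_dissociation": 1.0, "surface_migration": 0.5,
            "penetration": 1.2, "reverse_penetration": 1.4, "diffusion": 0.8}

def kmc_step(barriers):
    """One rejection-free kMC (BKL/Gillespie) step: pick an event with
    probability proportional to its Arrhenius rate, then advance the clock."""
    names = list(barriers)
    rates = np.array([NU * math.exp(-barriers[n] / (KB_EV * T)) for n in names])
    total = rates.sum()
    event = rng.choice(names, p=rates / total)
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return event, dt

event, dt = kmc_step(barriers)
print(event, f"{dt:.2e} s")
```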
Blind Predictions of DNA and RNA Tweezers Experiments with Force and Torque
Chou, Fang-Chieh; Lipfert, Jan; Das, Rhiju
2014-01-01
Single-molecule tweezers measurements of double-stranded nucleic acids (dsDNA and dsRNA) provide unprecedented opportunities to dissect how these fundamental molecules respond to forces and torques analogous to those applied by topoisomerases, viral capsids, and other biological partners. However, tweezers data are still most commonly interpreted post facto in the framework of simple analytical models. Testing falsifiable predictions of state-of-the-art nucleic acid models would be more illuminating but has not been performed. Here we describe a blind challenge in which numerical predictions of nucleic acid mechanical properties were compared to experimental data obtained recently for dsRNA under applied force and torque. The predictions were enabled by the HelixMC package, first presented in this paper. HelixMC advances crystallography-derived base-pair level models (BPLMs) to simulate kilobase-length dsDNAs and dsRNAs under external forces and torques, including their global linking numbers. These calculations recovered the experimental bending persistence length of dsRNA within the error of the simulations and accurately predicted that dsRNA's “spring-like” conformation would give a two-fold decrease of stretch modulus relative to dsDNA. Further blind predictions of helix torsional properties, however, exposed inaccuracies in current BPLM theory, including three-fold discrepancies in torsional persistence length at the high force limit and the incorrect sign of dsRNA link-extension (twist-stretch) coupling. Beyond these experiments, HelixMC predicted that ‘nucleosome-excluding’ poly(A)/poly(T) is at least two-fold stiffer than random-sequence dsDNA in bending, stretching, and torsional behaviors; Z-DNA to be at least three-fold stiffer than random-sequence dsDNA, with a near-zero link-extension coupling; and non-negligible effects from base pair step correlations. We propose that experimentally testing these predictions should be powerful next steps for understanding the flexibility of dsDNA and dsRNA in sequence contexts and under mechanical stresses relevant to their biology. PMID:25102226
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Sandri, Laura; Ramona Stefanescu, Elena; Patra, Abani; Marzocchi, Warner; Costa, Antonio; Sulpizio, Roberto
2014-05-01
Explosive volcanoes and, especially, pyroclastic density currents (PDCs) pose an enormous threat to populations living in the surroundings of volcanic areas. Difficulties in the modeling of PDCs are related to (i) the very complex and stochastic physical processes intrinsic to their occurrence and (ii) a lack of knowledge about how these processes actually form and evolve. This means that there are deep uncertainties (namely, of aleatory nature due to point (i) above, and of epistemic nature due to point (ii) above) associated with the study and forecast of PDCs. Consequently, the assessment of their hazard is better described by probabilistic approaches than by deterministic ones. What is actually done to assess probabilistic hazard from PDCs is to couple deterministic simulators with statistical techniques that can supply probabilities and inform about the uncertainties involved. In this work, some examples of both PDC numerical simulators (Energy Cone and TITAN2D) and uncertainty quantification techniques (Monte Carlo sampling, MC; Polynomial Chaos Quadrature, PCQ; and Bayesian Linear Emulation, BLE) are presented, and their advantages, limitations, and future potential are underlined. The key point in choosing a specific method rests on the balance between its computational cost, the physical reliability of the simulator, and the target of the hazard analysis (type of PDCs considered, time scale selected for the analysis, particular guidelines received from decision-making agencies, etc.). Although current numerical and statistical techniques have brought important advances in probabilistic volcanic hazard assessment from PDCs, some of them may be further applicable to more sophisticated simulators. In addition, forthcoming improvements could be focused on three main multidisciplinary directions: 1) validate the frequently used simulators (through comparison with PDC deposits and other simulators), 2) decrease simulator runtimes (either by improving knowledge of the physical processes or through more efficient programming and parallelization), and 3) improve uncertainty quantification techniques.
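Monte Carlo sampling, the simplest of the techniques named above, amounts to propagating uncertain inputs through the simulator and counting exceedances. In the sketch below, a toy energy-cone runout formula stands in for a real simulator such as TITAN2D, and the input distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def energy_cone_runout(collapse_height_m, phi_deg):
    """Toy stand-in for an energy-cone simulator: runout distance at which
    the energy line of slope tan(phi), dropped from the collapse height,
    intersects flat ground."""
    return collapse_height_m / np.tan(np.radians(phi_deg))

# Aleatory inputs: uncertain collapse height and friction angle (placeholders).
H = rng.lognormal(mean=np.log(500.0), sigma=0.4, size=100_000)  # m
PHI = rng.uniform(8.0, 14.0, size=100_000)                      # degrees

runout = energy_cone_runout(H, PHI)
target_km = 4.0
p_exceed = (runout > target_km * 1000).mean()
print(f"P(runout > {target_km} km) = {p_exceed:.3f}")
```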
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the subject of experiment or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.
Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A
2005-07-21
Monte Carlo (MC) simulations have traditionally been used for single-field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple-field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10^13 ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2% in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
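With this calibration, converting a per-source-particle MC dose to absolute dose is simple arithmetic. A sketch using the quoted 10 × 10 cm² constant, with a hypothetical per-electron voxel dose:

```python
# Convert an MC dose scored per incident source electron into absolute dose,
# using the paper's calibration for a 6 MV beam, 10x10 cm^2 field:
# 1 MU corresponds to ~8.129e13 electrons incident on the target (+/- 1.0%).
ELECTRONS_PER_MU = 8.129e13

def absolute_dose_gy(dose_per_electron_gy, monitor_units):
    """Scale a per-source-particle MC dose to absolute dose for a given MU."""
    return dose_per_electron_gy * ELECTRONS_PER_MU * monitor_units

# Example: a voxel scoring 2.5e-17 Gy per source electron (hypothetical
# value), with 100 MU delivered.
print(f"{absolute_dose_gy(2.5e-17, 100):.3f} Gy")
```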
Dosimetry applications in GATE Monte Carlo toolkit.
Papadimitroulas, Panagiotis
2017-09-01
Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Underwood, T. S. A.; Sung, W.; McFadden, C. H.; McMahon, S. J.; Hall, D. C.; McNamara, A. L.; Paganetti, H.; Sawakuchi, G. O.; Schuemann, J.
2017-04-01
Whilst Monte Carlo (MC) simulations of proton energy deposition have been well-validated at the macroscopic level, their microscopic validation remains lacking. Equally, no gold standard yet exists for experimental metrology of individual proton tracks. In this work we compare the distributions of stochastic proton interactions simulated using the TOPAS-nBio MC platform against confocal microscope data for Al2O3:C,Mg fluorescent nuclear track detectors (FNTDs). We irradiated 8 × 4 × 0.5 mm³ FNTD chips inside a water phantom, positioned at seven positions along a pristine proton Bragg peak with a range in water of 12 cm. MC simulations were implemented in two stages: (1) using TOPAS to model the beam properties within a water phantom and (2) using TOPAS-nBio with Geant4-DNA physics to score particle interactions through a water surrogate of Al2O3:C,Mg. The measured median track integrated brightness (IB) was observed to be strongly correlated with both (i) voxelized track-averaged linear energy transfer (LET) and (ii) the frequency-mean microdosimetric lineal energy, $\overline{y_F}$, both simulated in pure water. Histograms of FNTD track IB were compared against TOPAS-nBio histograms of the number of terminal electrons per proton, scored in water with mass density scaled to mimic Al2O3:C,Mg. Trends between exposure depths observed in TOPAS-nBio simulations were experimentally replicated in the study of FNTD track IB. Our results represent an important first step towards the experimental validation of MC simulations on the sub-cellular scale and suggest that FNTDs can enable experimental study of the microdosimetric properties of individual proton tracks.
A Comparison of Experimental EPMA Data and Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2004-01-01
Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been used to develop φ(ρz) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. Additional information is contained in the extended abstract.
3D quantitative photoacoustic image reconstruction using Monte Carlo method and linearization
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Ishihara, Miya
2018-02-01
To quantify functional and structural information about peripheral blood vessels for the diagnosis of diseases that affect them, such as diabetes and peripheral vascular disease, we study 3D quantitative photoacoustic tomography (QPAT), which reconstructs optical properties such as the absorption coefficient, reflecting microvascular structure, hemoglobin concentration, and oxygen saturation. QPAT image reconstruction algorithms based on the radiative transfer equation (RTE) and the photon diffusion equation (PDE) have been proposed. However, it is not easy to use the RTE in clinical practice because of the huge computational load and long calculation time. On the other hand, the PDE is considered problematic because it does not approximate the RTE well near the illuminating position. In this study, we developed a 3D QPAT image reconstruction using the Monte Carlo (MC) method, which approximates the RTE better than the PDE in the region near the illuminating surface. To reduce the calculation time, we applied linearization. The QPAT image reconstruction algorithm with the MC method and linearization was examined in numerical simulations and a phantom experiment using a scanning system with a single probe consisting of P(VDF-TrFE) piezoelectric film and an optical fiber.
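The linearization step can be pictured as solving a regularized linear system relating absorption perturbations to pressure perturbations. In the sketch below a random matrix stands in for the Jacobian that would be built from the MC forward model, and the regularization weight is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linearized QPAT update: delta_p ~ J @ delta_mu_a, where row i of the
# Jacobian J holds the sensitivity of measurement i to the absorption
# coefficient in each voxel. Here J is random, standing in for sensitivities
# that would come from the MC forward model.
n_meas, n_vox = 400, 1000
J = rng.random((n_meas, n_vox)) * 1e-3
mu_true = 0.01 + 0.02 * (rng.random(n_vox) < 0.05)  # sparse absorbers
p_meas = J @ mu_true

lam = 1e-6  # Tikhonov regularization weight (assumption)
A = J.T @ J + lam * np.eye(n_vox)
mu_rec = np.linalg.solve(A, J.T @ p_meas)
err = np.linalg.norm(mu_rec - mu_true) / np.linalg.norm(mu_true)
print(f"relative reconstruction error: {err:.2%}")
```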
Measuring Virtual Simulations Value in Training Exercises - USMC Use Case
2015-12-04
NASA Astrophysics Data System (ADS)
Lépinoux, J.; Sigli, C.
2018-01-01
In a recent paper, the authors showed how the cluster free energies are constrained by the coagulation probability, and explained various anomalies observed during precipitation kinetics in concentrated alloys. This coagulation probability proved too complex a function to be accurately predicted from the cluster distribution alone in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates and then to switch to CD, which is mean-field but, unlike MC, not limited by a simulation box size. In this paper, we explain how to take into account the information available before the quasi-equilibrium state in order to establish guidelines for safely predicting the cluster free energies.
NASA Astrophysics Data System (ADS)
Hu, R.; Liu, Q.
2016-12-01
For civil engineering projects, especially in the subsurface with groundwater, the artificial ground freezing (AGF) method has been widely used. Commonly, a refrigerant is circulated through a pre-buried pipe network to form a freezing wall to support the construction. In many cases, the temperature change is treated as a result of simple heat conduction. However, the influence of the water-ice phase change on the flow properties should not be neglected if a large amount of groundwater with high flow velocities is present. In this work, we perform 2D modelling (software: Comsol Multiphysics) of an AGF project for a metro tunnel in Southern China, taking groundwater flow into account. The model is validated against in-situ measurements of groundwater flow and temperature. We choose a cross section of this horizontal AGF project and set up a model with horizontal groundwater flow normal to the axis of the tunnel. The Darcy velocity is a coupling variable related to the temperature field. To capture the phase change of the pore water and the decrease of permeability in the freezing zone, we introduce an effective hydraulic conductivity described as a function of temperature change. The energy conservation problem is solved by the apparent heat capacity method, and the related parameter change is described by a step function (McKenzie et al., 2007). The temperature contour maps combined with the groundwater flow velocity at different times indicate that the freezing wall takes an asymmetrical shape along the groundwater flow direction. It forms slowly, and the freezing wall is thinner on the upstream side than on the downstream side. The closure time of the freezing wall increases at the middle of both the upstream and downstream sides. The average thickness of the freezing wall on the upstream side is most strongly affected by the groundwater flow velocity. With the successful validation of this model, this numerical simulation could provide guidance for this AGF project in the future. Reference: McKenzie, J. M., et al., Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs. Advances in Water Resources 30, 966-983 (2007).
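The step-function treatment cited above is commonly implemented as a smooth interpolation of log-conductivity across the phase-change interval, in the spirit of McKenzie et al. (2007); the parameter values below are placeholders:

```python
import numpy as np

# Effective hydraulic conductivity as a smoothed step across the freezing
# interval, in the spirit of McKenzie et al. (2007). Values are placeholders.
K_UNFROZEN = 1e-5  # m/s, unfrozen hydraulic conductivity
K_FROZEN = 1e-12   # m/s, residual conductivity of frozen soil
T_FREEZE = 0.0     # degC, nominal freezing point
DT = 0.5           # degC, half-width of the phase-change interval

def effective_conductivity(temp_c):
    """Smoothly interpolate log-conductivity between frozen and unfrozen."""
    frac_unfrozen = np.clip((temp_c - (T_FREEZE - DT)) / (2 * DT), 0.0, 1.0)
    return K_FROZEN * (K_UNFROZEN / K_FROZEN) ** frac_unfrozen

for t in (-5.0, -0.25, 0.0, 0.25, 5.0):
    print(f"T = {t:+5.2f} degC -> K = {effective_conductivity(t):.2e} m/s")
```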
NASA Astrophysics Data System (ADS)
Katsoulakis, Markos A.; Vlachos, Dionisios G.
2003-11-01
We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes, as approximations at larger length scales, for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous-time microscopic MC and CGMC simulations are compared under far-from-equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space also leads to coarse-graining in time by a factor q², where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous-time MC simulations, varying from q³ for short-range potentials to q⁴ for long-range potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
Monte Carlo decision curve analysis using aggregate data.
Hozo, Iztok; Tsalatsanis, Athanasios; Djulbegovic, Benjamin
2017-02-01
Decision curve analysis (DCA) is an increasingly used method for evaluating diagnostic tests and predictive models, but its application requires individual patient data. The Monte Carlo (MC) method can be used to simulate the probabilities and outcomes of individual patients, and offers an attractive option for applying DCA. We constructed an MC decision model to simulate individual probabilities of the outcomes of interest. These probabilities were contrasted against the threshold probability at which a decision-maker is indifferent between the key management strategies: treat all, treat none, or use the predictive model to guide treatment. We compared the results of DCA with MC-simulated data against the results of DCA based on actual individual patient data for three decision models published in the literature: (i) statins for primary prevention of cardiovascular disease, (ii) hospice referral for terminally ill patients, and (iii) prostate cancer surgery. The results of MC DCA and patient-data DCA were identical. To the extent that patient-data DCA was used to inform decisions about statin use, referral to hospice, or prostate surgery, the results indicate that MC DCA could have also been used. As long as the aggregate parameters on the distribution of outcome probabilities and treatment effects are accurately described in the published reports, MC DCA will generate results indistinguishable from individual-patient-data DCA. We provide a simple, easy-to-use model, which can facilitate wider use of DCA and better evaluation of diagnostic tests and predictive models that rely only on aggregate data reported in the literature. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.
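The quantity that DCA contrasts against the threshold probability is the net benefit. A sketch of the standard formula, with MC-simulated individual probabilities (drawn here from an invented aggregate Beta distribution) standing in for patient data:

```python
import numpy as np

rng = np.random.default_rng(5)

# MC surrogate for individual patient data: simulate each patient's event
# probability from an aggregate (Beta) distribution, then the outcome itself.
n = 50_000
p_event = rng.beta(2, 8, size=n)   # aggregate distribution (assumption)
outcome = rng.random(n) < p_event  # simulated individual outcomes

def net_benefit(p_model, outcome, p_threshold):
    """Standard DCA net benefit of 'treat if model probability >= threshold'."""
    treat = p_model >= p_threshold
    tp = np.mean(treat & outcome)
    fp = np.mean(treat & ~outcome)
    return tp - fp * p_threshold / (1 - p_threshold)

for pt in (0.05, 0.10, 0.20):
    nb_model = net_benefit(p_event, outcome, pt)
    nb_all = net_benefit(np.ones(n), outcome, pt)  # treat-all strategy
    print(f"pt={pt:.2f}: model NB={nb_model:.4f}, treat-all NB={nb_all:.4f}")
```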
Quantum Monte Carlo Simulation of condensed van der Waals Systems
NASA Astrophysics Data System (ADS)
Benali, Anouar; Shulenburger, Luke; Romero, Nichols A.; Kim, Jeongnim; Anatole von Lilienfeld, O.
2012-02-01
Van der Waals forces are as ubiquitous as they are infamous. While post-Hartree-Fock methods enable accurate estimates of these forces in molecules and clusters, they remain elusive for many-electron condensed phase systems. We present Quantum Monte Carlo [1,2] results for condensed van der Waals systems. Interatomic many-body contributions to cohesive energies and bulk moduli will be discussed. Numerical evidence is presented for crystals of rare gas atoms and compared to experiments and other methods [3]. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000. [1] J. Kim, K. Esler, J. McMinis and D. Ceperley, SciDAC 2010, J. of Physics: Conference Series, Chattanooga, Tennessee, July 11 2011. [2] QMCPACK simulation suite, http://qmcpack.cmscc.org (unpublished). [3] O. A. von Lilienfeld and A. Tkatchenko, J. Chem. Phys. 132, 234109 (2010).
NASA Astrophysics Data System (ADS)
De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico
2012-02-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^{-k}, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
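The transition scale referred to above can be stated explicitly. A worked summary of the relevant scalings, assuming adiabatic evolution in a medium with ρ ∝ r^{-k} (notation follows the standard afterglow literature rather than the paper's exact definitions):

```latex
% Sedov length l_S: the radius at which the swept-up rest-mass energy
% equals the explosion energy E, for an external density rho = A r^{-k}:
\[
  l_{\mathrm{S}} = \left[ \frac{(3-k)\,E}{4\pi A c^{2}} \right]^{1/(3-k)}
\]
% Ultrarelativistic (Blandford-McKee) stage:  \Gamma^{2} \propto r^{-(3-k)}
% Newtonian (Sedov-Taylor) stage:             R \propto t^{2/(5-k)}
```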
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of the scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on a Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
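Steps 4 and 5 of the workflow are straightforward in outline: interpolate the sparse-angle scatter maps to every projection angle, periodically in gantry angle, then subtract. The array shapes and data below are placeholders:

```python
import numpy as np

# Step 4 in outline: interpolate scatter maps simulated at sparse angles to
# every projection angle, then subtract them (step 5). Shapes are illustrative.
n_proj, n_sparse, n_u, n_v = 360, 31, 64, 64
sparse_angles = np.linspace(0, 360, n_sparse, endpoint=False)
scatter_sparse = np.random.rand(n_sparse, n_u, n_v)  # from MC (placeholder)
all_angles = np.arange(n_proj)

flat = scatter_sparse.reshape(n_sparse, -1)
interp = np.empty((n_proj, n_u * n_v))
for j in range(flat.shape[1]):
    # period=360 makes the interpolation wrap around the gantry rotation
    interp[:, j] = np.interp(all_angles, sparse_angles, flat[:, j], period=360)
scatter_all = interp.reshape(n_proj, n_u, n_v)

raw = np.random.rand(n_proj, n_u, n_v) + scatter_all  # placeholder projections
corrected = np.clip(raw - scatter_all, 0, None)       # step 5: remove scatter
print(corrected.shape)
```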
Bhaskaran, Abhishek; Barry, M A Tony; Al Raisi, Sara I; Chik, William; Nguyen, Doan Trang; Pouliopoulos, Jim; Nalliah, Chrishan; Hendricks, Roger; Thomas, Stuart; McEwan, Alistair L; Kovoor, Pramesh; Thiagalingam, Aravinda
2015-10-01
Magnetic navigation system (MNS) ablation was suspected to be less effective and less stable in highly mobile cardiac regions compared to radiofrequency (RF) ablation with manual control (MC). The aim of the study was to compare (1) the lesion size and (2) the stability of MNS versus MC during irrigated RF ablation, with and without simulated mechanical heart wall motion. In a previously validated myocardial phantom, the performance of a Navistar RMT Thermocool catheter (Biosense Webster, CA, USA) guided with MNS was compared to a manually controlled Navistar irrigated Thermocool catheter (Biosense Webster, CA, USA). The lesion dimensions were compared with the catheter in inferior and superior orientation, with and without 6-mm simulated wall motion. All ablations were performed with 40 W power and 30 ml/min irrigation for 60 s. A total of 60 ablations were performed. The mean lesion volumes with MNS and MC were 57.5 ± 7.1 and 58.1 ± 7.1 mm³, respectively, in the inferior catheter orientation (n = 23, p = 0.6), and 62.8 ± 9.9 and 64.6 ± 7.6 mm³, respectively, in the superior catheter orientation (n = 16, p = 0.9). With 6-mm simulated wall motion, the mean lesion volumes with MNS and MC were 60.2 ± 2.7 and 42.8 ± 8.4 mm³, respectively, in the inferior catheter orientation (n = 11, p < 0.01), and 74.1 ± 5.8 and 54.2 ± 3.7 mm³, respectively, in the superior catheter orientation (n = 10, p < 0.01). During 6-mm simulated wall motion, the MC catheter and MNS catheter moved 5.2 ± 0.1 and 0 mm, respectively, in the inferior orientation, and 5.5 ± 0.1 and 0 mm, respectively, in the superior orientation on the ablation surface. The lesion dimensions were larger with MNS compared to MC in the presence of simulated wall motion, consistent with greater catheter stability. However, similar lesion dimensions were observed in the stationary model.
Wan Chan Tseung, H; Ma, J; Beltran, C
2015-06-01
Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is ~20 s for 1 × 10⁷ proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam-based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.
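Task (5), the per-voxel statistical error estimate, is commonly done with a batch method. The sketch below is a generic CPU-side illustration (not the authors' GPU kernel): histories are split into independent equal-size batches, and the standard error of the mean dose is reported per voxel.

```python
import numpy as np

def batch_uncertainty(batch_doses):
    """Per-voxel statistical error from independent equal-size MC batches.

    batch_doses: (n_batches, nx, ny, nz) dose arrays.
    Returns (mean_dose, relative_error), with the relative error defined as
    the standard error of the mean divided by the mean (zero where dose=0).
    """
    n = batch_doses.shape[0]
    mean = batch_doses.mean(axis=0)
    sem = batch_doses.std(axis=0, ddof=1) / np.sqrt(n)
    rel = np.divide(sem, mean, out=np.zeros_like(mean), where=mean > 0)
    return mean, rel
```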
Age and tectonic setting of the Mesozoic McCoy Mountains Formation in western Arizona, USA
Spencer, J.E.; Richard, S.M.; Gehrels, G.E.; Gleason, J.D.; Dickinson, W.R.
2011-01-01
The McCoy Mountains Formation consists of Upper Jurassic to Upper Cretaceous siltstone, sandstone, and conglomerate exposed in an east-west-trending belt in southwestern Arizona and southeastern California. At least three different tectonic settings have been proposed for McCoy deposition, and multiple tectonic settings are likely over the ~80 m.y. age range of deposition. U-Pb isotopic analysis of 396 zircon sand grains from at or near the top of McCoy sections in the southern Little Harquahala, Granite Wash, New Water, and southern Plomosa Mountains, all in western Arizona, identified only Jurassic or older zircons. A basaltic lava flow near the top of the section in the New Water Mountains yielded a U-Pb zircon date of 154.4 ± 2.1 Ma. Geochemically similar lava flows and sills in the Granite Wash and southern Plomosa Mountains are inferred to be approximately the same age. We interpret these new analyses to indicate that Mesozoic clastic strata in these areas are Upper Jurassic and are broadly correlative with the lowermost McCoy Mountains Formation in the Dome Rock, McCoy, and Palen Mountains farther west. Six samples of numerous Upper Jurassic basaltic sills and lava flows in the McCoy Mountains Formation in the Granite Wash, New Water, and southern Plomosa Mountains yielded initial εNd values (at t = 150 Ma) of between +4 and +6. The geochemistry and geochronology of this igneous suite, and detrital-zircon geochronology of the sandstones, support the interpretation that the lower McCoy Mountains Formation was deposited during rifting within the western extension of the Sabinas-Chihuahua-Bisbee rift belt. Abundant 190-240 Ma zircon sand grains were derived from nearby, unidentified Triassic magmatic-arc rocks in areas that were unaffected by younger Jurassic magmatism. A sandstone from the upper McCoy Mountains Formation in the Dome Rock Mountains (Arizona) yielded numerous 80-108 Ma zircon grains and almost no 190-240 Ma grains, revealing a major reorganization in sediment-dispersal pathways and/or modification of source rocks that had occurred by ca. 80 Ma. © 2011 Geological Society of America.
A novel Monte Carlo algorithm for simulating crystals with McStas
NASA Astrophysics Data System (ADS)
Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.
2004-07-01
We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent, and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers, and perhaps samples) in neutron ray-tracing packages like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; therefore, it can also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aiming at characterizing crystals, such as diffraction topographs.
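The core idea of sampling the microscopic homogeneous regions (mosaic blocks) can be illustrated in a few lines. The sketch below is a toy version under simplifying assumptions: a Gaussian mosaic distribution, a fixed intrinsic reflection width, and no secondary extinction; it is not the algorithm of the paper.

```python
import numpy as np

def mosaic_reflecting_fraction(theta_in, theta_bragg, mosaicity_fwhm,
                               n=100_000, rng=np.random.default_rng(0)):
    """Crude MC estimate of the reflecting fraction of a mosaic crystal.

    Each incident neutron meets a mosaic block whose tilt is drawn from a
    Gaussian mosaic distribution (FWHM = mosaicity_fwhm, radians). The block
    reflects if the local incidence angle matches the Bragg angle within a
    narrow intrinsic width (10 microradians here, an assumed value).
    """
    sigma = mosaicity_fwhm / 2.3548          # FWHM -> standard deviation
    tilts = rng.normal(0.0, sigma, n)        # block misorientations
    intrinsic_width = 1e-5
    hits = np.abs((theta_in + tilts) - theta_bragg) < intrinsic_width / 2
    return hits.mean()
```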
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma distribution modelling structure, but with a new parameterization technique. We used two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of a Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multivariate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sample these two Gamma parameters from the multivariate normal distribution for each month of each year and use them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past period (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones, including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrology simulation in the presentation.
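A minimal sketch of the hierarchical sampling step described above is given below in Python; the Gamma parameterization follows from the (mean, standard deviation) pair, while the guard values and function signature are illustrative, not the fitted values from the 15 stations.

```python
import numpy as np

def simulate_month(p_ww, p_dd, gamma_param_mvn, n_days=30,
                   rng=np.random.default_rng(1)):
    """One month of daily rainfall from the hierarchical MC-Gamma model.

    p_ww, p_dd: wet-to-wet and dry-to-dry transition probabilities.
    gamma_param_mvn: (mean_vec, cov) of the fitted bivariate normal for the
    Gamma (mean, std dev) of wet-day depths -- resampled each month so that
    multi-year variability is not underestimated.
    """
    mu, sd = rng.multivariate_normal(*gamma_param_mvn)
    mu, sd = max(mu, 0.1), max(sd, 0.1)          # guard against negatives
    shape, scale = (mu / sd) ** 2, sd ** 2 / mu  # Gamma from mean/std dev
    rain, wet = np.zeros(n_days), False
    for d in range(n_days):
        p_wet = p_ww if wet else 1.0 - p_dd      # first-order MC occurrence
        wet = rng.random() < p_wet
        if wet:
            rain[d] = rng.gamma(shape, scale)
    return rain
```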
SU-G-JeP2-15: Proton Beam Behavior in the Presence of Realistic Magnet Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, D M; Wachowicz, K; Fallone, B G
2016-06-15
Purpose: To investigate the effects of magnetic fields on proton therapy beams for integration with MRI. Methods: 3D magnetic fields from an open-bore superconducting MRI model (previously developed by our group) and 3D magnetic fields from an in-house gradient coil design were applied to various monoenergetic proton pencil beam (80 MeV to 250 MeV) simulations. In all simulations, the z-axis of the simulation geometry coincided with the direction of the B0 field and the magnet isocentre. In each simulation, the initial beam trajectory was varied. The first set of simulations was based on analytic magnetic force equations (analytic simulations), which could be rapidly calculated yet were limited to propagating proton beams in vacuum. The second set is full Monte Carlo (MC) simulations, which used the GEANT4 MC toolkit. Metrics such as the beam position and dose profiles were extracted. Comparisons between the cases with and without magnetic fields present were made. Results: The analytic simulations served as verification checks for the MC simulations when the same simulation geometries were used. The results of the analytic simulations agreed with the MC simulations performed in vacuum. The presence of the MRI's static magnetic field causes proton pencil beams to follow a slight helical trajectory when there are some initial off-axis components. The 80 MeV, 150 MeV, and 250 MeV proton beams rotated by 4.9°, 3.6°, and 2.8°, respectively, when they reached z = 0 cm. The deflections caused by the gradient coils' magnetic fields show spatially invariant patterns with a maximum range of 0.5 mm at z = 0 cm. Conclusion: This investigation reveals that both the MRI's B0 and gradient magnetic fields can cause small but observable deflections of proton beams at the energies studied. The MRI's static field caused a rotation of the beam, while the gradient coils' field effects were spatially invariant. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization)
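The analytic simulations propagate protons in vacuum under the Lorentz force alone. A standard relativistic integrator for this is the Boris pusher, sketched below for a uniform axial field; the 0.5 T magnitude, time step, and off-axis fraction are illustrative assumptions, not the modeled MRI field.

```python
import numpy as np

Q = 1.602176634e-19   # proton charge [C]
M = 1.67262192e-27    # proton mass [kg]
C = 299792458.0       # speed of light [m/s]

def boris_push(p, E, B, dt):
    """One relativistic Boris step for the momentum p [kg m/s]."""
    p_minus = p + Q * E * dt / 2.0
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / (M * C) ** 2)
    t = Q * B * dt / (2.0 * gamma * M)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)
    p_plus = p_minus + np.cross(p_prime, s)
    return p_plus + Q * E * dt / 2.0

# 150 MeV proton with a small off-axis component in a uniform 0.5 T field
K = 150e6 * Q                                   # kinetic energy [J]
g0 = 1.0 + K / (M * C ** 2)
p_mag = M * C * np.sqrt(g0 ** 2 - 1.0)
p = p_mag * np.array([0.02, 0.0, np.sqrt(1.0 - 0.02 ** 2)])
x, E, B, dt = np.zeros(3), np.zeros(3), np.array([0.0, 0.0, 0.5]), 1e-11
for _ in range(100_000):                        # track the helical drift
    p = boris_push(p, E, B, dt)
    gamma = np.sqrt(1.0 + np.dot(p, p) / (M * C) ** 2)
    x += p / (gamma * M) * dt
```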
NASA Astrophysics Data System (ADS)
Yang, Zhangcan; Lively, Michael A.; Allain, Jean Paul
2015-02-01
The production of self-organized nanostructures by ion beam sputtering has been of keen interest to researchers for many decades. Despite numerous experimental and theoretical efforts to understand ion-induced nanostructures, there are still many basic questions open to discussion, such as the role of erosion or curvature-dependent sputtering. In this work, a hybrid MD/kMC (molecular dynamics/kinetic Monte Carlo) multiscale atomistic model is developed to investigate these knowledge gaps, and its predictive ability is validated across the experimental parameter space. This model uses crater functions, obtained from MD simulations, to model the prompt mass redistribution due to single-ion impacts. Defect migration, which is missing from previous models that use crater functions, is treated by a kMC Arrhenius method. Using this model, a systematic study was performed for silicon bombarded by Ar+ ions of various energies (100 eV, 250 eV, 500 eV, 700 eV, and 1000 eV) at incidence angles of 0° to 80°. The simulation results were compared with experimental findings, showing good agreement in many aspects of surface evolution, such as the phase diagram. The underestimation of the ripple wavelength by the simulations suggests that surface diffusion is not the main smoothening mechanism for ion-induced pattern formation. Furthermore, the simulated results were compared with moment-description continuum theory and found to give better results, as the simulation did not suffer from the same mathematical inconsistencies as the continuum model. The key finding was that redistributive effects are dominant in the formation of flat surfaces and parallel-mode ripples, but erosive effects are dominant at high angles when perpendicular-mode ripples are formed. Ion irradiation with simultaneous sample rotation was also simulated, resulting in arrays of square-ordered dots. The patterns obtained from sample rotation were strongly correlated with the rotation speed and with the pattern types formed without sample rotation, and a critical rotation speed of about 5 rpm was found separating disordered ripples from square-ordered dots. Finally, simulations of dual-beam sputtering were performed, with the resulting patterns determined by the flux ratio of the two beams and the pattern types resulting from single-beam sputtering under the same conditions.
Radiation Measurements in Simulated Ablation Layers
2010-12-06
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, Sang Chul; Lee, Hyuck Mo, E-mail: hmlee@kaist.ac.kr; Lo, Yu Chieh
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (E_b) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier E_b depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
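A kMC step over a set of events like the eight above follows the standard rejection-free (BKL/Gillespie) recipe: convert each DFT barrier into an Arrhenius rate, pick an event with probability proportional to its rate, and advance the clock by an exponentially distributed increment. A generic sketch follows; the prefactors and barriers are placeholders, not the paper's DFT values.

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant [eV/K]

def arrhenius(prefactor, barrier_ev, T):
    """Rate of a thermally activated event (harmonic TST form)."""
    return prefactor * np.exp(-barrier_ev / (KB * T))

def kmc_step(events, T, rng=np.random.default_rng(2)):
    """Pick the next event and time increment (BKL/Gillespie algorithm).

    events: list of (name, prefactor_Hz, barrier_eV). Returns (name, dt).
    """
    rates = np.array([arrhenius(nu, eb, T) for _, nu, eb in events])
    total = rates.sum()
    i = np.searchsorted(np.cumsum(rates) / total, rng.random())
    dt = -np.log(rng.random()) / total   # exponential waiting time
    return events[i][0], dt

# Placeholder event table (illustrative numbers only)
events = [("adsorption", 1e13, 0.0), ("desorption", 1e13, 1.1),
          ("dissociation", 1e13, 0.9), ("penetration", 1e13, 1.3)]
name, dt = kmc_step(events, T=800.0)
```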
The Good, the Bad, and the Ugly: Numerical Prediction for Hurricane Juan (2003)
NASA Astrophysics Data System (ADS)
Gyakum, J.; McTaggart-Cowan, R.
2004-05-01
The range of accuracy of the numerical weather prediction (NWP) guidance for the landfall of Hurricane Juan (2003), from nearly perfect to nearly useless, motivates a study of the NWP forecast errors on 28-29 September 2003 in the eastern North Atlantic. Although the forecasts issued over the period were of very high quality, this was primarily due to the diligence of the forecasters, not the reliability of the numerical predictions provided to them by the North American operational centers and the research community. A bifurcation in the forecast fields from various centers and institutes occurred beginning with the 0000 UTC run of 28 September and continuing until landfall just after 0000 UTC on 29 September. The GFS (NCEP), Eta (NCEP), GEM (Canadian Meteorological Centre; CMC), and MC2 (McGill) forecast models all showed an extremely weak (minimum SLP above 1000 hPa) remnant vortex moving north-northwestward into the Gulf of Maine and merging with a diabatically-developed surface low offshore. The GFS uses a vortex-relocation scheme, the Eta a vortex bogus, and the GEM and MC2 are run on CMC analyses that contain no enhanced vortex. The UK Met Office operational, the GFDL, and the NOGAPS (US Navy) forecast models all ran a small-scale hurricane-like vortex directly into Nova Scotia and verified very well for this case. The UKMO model uses synthetic observations to enhance structures in poorly forecast areas during the analysis cycle, and both the GFDL and NOGAPS models use advanced idealized vortex bogusing in their initial conditions. The quality of the McGill MC2 forecast is found to be significantly enhanced using a bogusing technique similar to that used in the initialization of the successful forecast models. A verification of the improved forecast is presented, along with a discussion of the need for operational quality control of the background fields in the analysis cycle and for proper representation of strong, small-scale tropical vortices.
NASA Astrophysics Data System (ADS)
Bellos, Vasilis; Tsakiris, George
2016-09-01
The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model with the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations, which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
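Of the two infiltration options, the Green-Ampt model requires solving an implicit equation for the cumulative infiltration F(t). A minimal fixed-point iteration is sketched below; the parameter values are illustrative, not the catchment's calibrated soil properties.

```python
import numpy as np

def green_ampt_cumulative(K, psi, dtheta, t, tol=1e-8):
    """Cumulative infiltration F(t) [cm] from the Green-Ampt equation
    F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), via fixed-point iteration.

    K: saturated hydraulic conductivity [cm/h]; psi: wetting-front suction
    head [cm]; dtheta: moisture deficit [-]; t: time [h].
    """
    s = psi * dtheta
    F = max(K * t, 1e-9)                 # initial guess
    while True:
        F_new = K * t + s * np.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Infiltration rate at time t: f = K * (1 + psi*dtheta / F)
F = green_ampt_cumulative(K=1.0, psi=16.7, dtheta=0.3, t=2.0)
f = 1.0 * (1.0 + 16.7 * 0.3 / F)
```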
New simulation model of multicomponent crystal growth and inhibition.
Wathen, Brent; Kuiper, Michael; Walker, Virginia; Jia, Zongchao
2004-04-02
We review a novel computational model for the study of crystal structures, both on their own and in conjunction with inhibitor molecules. The model advances existing Monte Carlo (MC) simulation techniques by extending them from modeling 3D crystal surface patches to modeling entire 3D crystals, and by including the use of "complex" multicomponent molecules within the simulations. These advances make it possible to incorporate the 3D shape and non-uniform surface properties of inhibitors into simulations, and to study what effect these inhibitor properties have on the growth of whole crystals containing up to tens of millions of molecules. The application of this extended MC model to the study of antifreeze proteins (AFPs) and their effects on ice formation is reported, including the success of the technique in achieving AFP-induced ice-growth inhibition with concurrent changes to ice morphology that mimic experimental results. Simulations of ice-growth inhibition suggest that the degree of inhibition afforded by an AFP is a function of its ice-binding position relative to the underlying anisotropic growth pattern of ice. This extended MC technique is applicable to other crystal and crystal-inhibitor systems, including more complex crystal systems such as clathrates.
An adaptive bias - hybrid MD/kMC algorithm for protein folding and aggregation.
Peter, Emanuel K; Shea, Joan-Emma
2017-07-05
In this paper, we present a novel hybrid Molecular Dynamics/kinetic Monte Carlo (MD/kMC) algorithm and apply it to protein folding and aggregation in explicit solvent. The new algorithm uses a dynamical definition of biases throughout the MD component of the simulation, normalized in relation to the unbiased forces. The algorithm guarantees sampling of the underlying ensemble in dependence on one average linear coupling factor 〈α〉_τ. We test the validity of the kinetics in simulations of dialanine and compare dihedral transition kinetics with long-time MD simulations. We find that for low 〈α〉_τ values, the kinetics are in good quantitative agreement. In folding simulations of TrpCage and TrpZip4 in explicit solvent, we also find good quantitative agreement with experimental results and prior MD/kMC simulations. Finally, we apply our algorithm to study growth of the Alzheimer amyloid Aβ16-22 fibril by monomer addition. We observe two possible binding modes, one at the extremity of the fibril (elongation) and one on the surface of the fibril (lateral growth), on timescales ranging from ns to 8 μs.
A new method for shape and texture classification of orthopedic wear nanoparticles.
Zhang, Dongning; Page, Janet R; Kavanaugh, Aaron E; Billi, Fabrizio
2012-09-27
Detailed morphologic analysis of particles produced during wear of orthopedic implants is important in determining a correlation among material, wear, and biological effects. However, the use of simple shape descriptors is insufficient to categorize the data and to compare the nature of wear particles generated by different implants. An approach based on the Discrete Fourier Transform (DFT) is presented for describing particle shape and surface texture. Four metal-on-metal bearing couples were tested in an orbital wear simulator under standard and adverse (steep-angled cups) wear simulator conditions. Digitized scanning electron microscope (SEM) images of the wear particles were imported into MATLAB to carry out Fourier descriptor calculations via a specifically developed algorithm. The descriptors were then used for studying particle characteristics (shape and texture) as well as for cluster classification. Analysis of the particles demonstrated the validity of the proposed model by showing that steep-angle Co-Cr wear particles were more asymmetric, compressed, extended, triangular, square, and roughened after 3 Mc (million cycles) than after 0.25 Mc. In contrast, particles from standard-angle samples were only more compressed and extended after 3 Mc compared to 0.25 Mc. Cluster analysis revealed that the 0.25 Mc steep-angle particle distribution was a subset of the 3 Mc distribution.
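Although the original calculations were done in MATLAB, the heart of the approach, Fourier descriptors of a closed particle outline, can be sketched in a few lines of Python. This is an illustration of the general technique, not the authors' specific algorithm.

```python
import numpy as np

def fourier_descriptors(contour, n_keep=16):
    """Translation/scale/rotation-invariant Fourier descriptors of a closed
    particle outline.

    contour: (N, 2) array of boundary points (e.g. traced from an SEM image).
    Returns n_keep normalized descriptor magnitudes; low orders capture gross
    shape (elongation, triangularity, squareness), high orders texture.
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    Z = np.fft.fft(z)
    Z[0] = 0.0                  # drop DC term -> translation invariance
    mags = np.abs(Z)            # magnitudes -> rotation/start-point invariance
    mags /= mags[1]             # scale by first harmonic -> scale invariance
    return mags[1:n_keep + 1]
```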
Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc
2015-01-01
To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of the Oncentra Brachy (OcB) treatment planning system using an ¹⁹²Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of the American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against the TG-43 methodology. Level 2 uses real patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. The ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both the accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed to overlap between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated by MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75%, but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC, but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Full-orbit and backward Monte Carlo simulation of runaway electrons
NASA Astrophysics Data System (ADS)
Del-Castillo-Negrete, Diego
2017-10-01
High-energy relativistic runaway electrons (RE) can be produced during magnetic disruptions due to electric fields generated during the thermal and current quench of the plasma. Understanding this problem is key for the safe operation of ITER because, if not avoided or mitigated, RE can severely damage the plasma-facing components. In this presentation we report on RE simulation efforts centered on two complementary approaches: (i) full-orbit (6-D phase space) relativistic numerical simulations in general (integrable or chaotic) 3-D magnetic and electric fields, including radiation damping and collisions, using the recently developed particle-based Kinetic Orbit Runaway electron Code (KORC), and (ii) backward Monte Carlo (MC) simulations based on a recently developed efficient backward stochastic differential equation (BSDE) solver. Following a description of the corresponding numerical methods, we present applications to: (i) RE synchrotron radiation (SR) emission using KORC and (ii) computation of time-dependent runaway probability distributions, RE production rates, and expected slowing-down and runaway times using BSDE. We study the dependence of these statistical observables on the electric and magnetic fields and the ion effective charge. SR is a key energy dissipation mechanism in the high-energy regime, and it is also extensively used as an experimental diagnostic of RE. Using KORC we study full-orbit effects on SR and discuss a recently developed SR synthetic diagnostic that incorporates the full angular dependence of SR and the location and basic optics of the camera. It is shown that oversimplifying the angular dependence of SR and/or ignoring orbit effects can significantly modify the shape and overestimate the amplitude of the spectra. Applications to DIII-D RE experiments are discussed.
NASA Astrophysics Data System (ADS)
Tecklenburg, Jan; Neuweiler, Insa; Dentz, Marco; Carrera, Jesus; Geiger, Sebastian
2013-04-01
Flow processes in geotechnical applications often take place in highly heterogeneous porous media, such as fractured rock. Since, in this type of media, classical modelling approaches are problematic, flow and transport are often modelled using multi-continua approaches. From such approaches, multirate mass transfer (mrmt) models can be derived to describe the flow and transport in the "fast" or mobile zone of the medium. The porous medium is then modelled with one mobile zone and multiple immobile zones, where the immobile zones are connected to the mobile zone by single-rate mass transfer. We proceed from an mrmt model for immiscible displacement of two fluids, where the Buckley-Leverett equation is expanded by a sink-source term which is nonlocal in time. This sink-source term models exchange with an immobile zone, with mass transfer driven by capillary diffusion. This nonlinear diffusive mass transfer can be approximated for particular imbibition or drainage cases by a linear process. We present a numerical scheme for this model together with simulation results for a single-fracture test case. We solve the mrmt model with the finite volume method and explicit time integration. The sink-source term is transformed into multiple single-rate mass transfer processes, as shown by Carrera et al. (1998), to make it local in time. With numerical simulations we studied immiscible displacement in a single-fracture test case. To do this, we calculated the flow parameters using information about the geometry and the integral solution for two-phase flow by McWhorter and Sunada (1990). Comparison to the results of the full two-dimensional two-phase flow model by Flemisch et al. (2011) shows good agreement of the saturation breakthrough curves. Carrera, J., Sanchez-Vila, X., Benet, I., Medina, A., Galarza, G., and Guimera, J.: On matrix diffusion: formulations, solution methods and qualitative effects, Hydrogeology Journal, 6, 178-190, 1998. Flemisch, B., Darcis, M., Erbertseder, K., Faigle, B., Lauser, A. et al.: DuMux: DUNE for multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media, Advances in Water Resources, 34, 1102-1112, 2011. McWhorter, D. B., and Sunada, D. K.: Exact integral solutions for two-phase flow, Water Resources Research, 26(3), 399-413, 1990.
Maser emission from planetary and stellar magnetospheres
NASA Astrophysics Data System (ADS)
Speirs, David
2012-07-01
A variety of astrophysical radio emissions have been identified to date in association with non-uniform magnetic fields and charged particle streams. From terrestrial auroral kilometric radiation (AKR) to observations of auroral radio emission from the flare star UV Ceti and CU Virginis, there are numerous examples of this intense, highly polarised magnetospheric radio signature [1][2]. Characterised by discrete spectral components at ~300kHz in the terrestrial auroral case, the radiation is clearly non-thermal and there is a strong belief that such emissions are generated by an electron cyclotron maser instability [1]. Previous work has focussed on a loss cone generation mechanism and cavity ducting model for radiation beaming, however recent theory and simulations suggest an alternative model comprising emission driven by an electron horseshoe distribution [1]. Such distributions are formed when particles descend into the increasing magnetic field of planetary / stellar auroral magnetospheres, where conservation of the magnetic moment results in conversion of axial momentum into rotational momentum. Theory has demonstrated that such distributions are highly unstable to cyclotron emission in the X-mode [3], and that these emissions when propagating tangential to the plasma cavity boundary may refract upwards due to plasma density inhomogeneity [4]. Scaled experiments have been conducted at the University of Strathclyde to study the emission process under controlled laboratory conditions [5]. In addition, numerical models have simulated the emission mechanism in the presence of a background plasma and in the absence of radiation boundaries [6]. Here we present the results of beam-plasma simulations that confirm the radiation model for tangential growth and upward refraction [4] and agree with recent Jodrell Bank observations of pulsed, narrowly beamed radio emission from the oblique rotator star CU Virginis [2]. [1] R. Bingham and R. A. Cairns, Phys. Plasmas, 7, 3089 (2000). [2] B.J. Kellett, V. Graffagnino, R. Bingham et al., ArXiv Astrophysics, 0701214 (2007). [3] R.A. Cairns, I. Vorgul, R. Bingham et al., Phys. Plasmas 18, 022902 (2011). [4] J.D. Menietti, R.L. Mutel, I.W. Christopher et al., J. Geophys. Res., 116, A12219 (2011). [5] S.L. McConville, M.E. Koepke, K.M. Gillespie et al., Plasma Phys. Control. Fusion, 53, 124020 (2011). [6] D.C. Speirs, K. Ronald, S.L. McConville, Phys. Plasmas, 17, 056501 (2010).
NASA Astrophysics Data System (ADS)
Limbu, Dil; Biswas, Parthapratim
We present a simple and efficient Monte Carlo (MC) simulation of iron (Fe) and nickel (Ni) clusters with N = 5-100, and of amorphous silicon (a-Si), starting from a random configuration. Using Sutton-Chen and Finnis-Sinclair potentials for Ni (in an fcc lattice) and Fe (in a bcc lattice), respectively, and a Stillinger-Weber potential for a-Si, the total energy of the system is optimized by employing MC moves that combine the stochastic nature of MC sampling with the gradient of the potential function. For both iron and nickel clusters, the energy of the configurations is found to be very close to the values listed in the Cambridge Cluster Database, whereas the maximum force on each cluster is found to be much lower than the corresponding value obtained from the optimized structural configurations reported in the database. An extension of the method to model the amorphous state of Si is presented, and the results are compared with experimental data and those obtained from other simulation methods. The work is partially supported by the NSF under Grant Number DMR 1507166.
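MC moves that mix stochastic displacements with the potential gradient can be realized in several ways; one textbook variant is the Metropolis-adjusted Langevin step sketched below in reduced units (kT = 1). This is a generic illustration of a gradient-biased move with an exact detailed-balance correction, not necessarily the authors' move set.

```python
import numpy as np

def mala_step(x, energy, grad, step, rng=np.random.default_rng(3)):
    """One Metropolis-adjusted Langevin (gradient-biased MC) move.

    Proposals drift down the potential gradient plus Gaussian noise; the
    Metropolis-Hastings correction keeps the Boltzmann ensemble exact.
    energy(x) -> float, grad(x) -> array of the same shape as x.
    """
    noise = rng.normal(0.0, np.sqrt(2.0 * step), size=x.shape)
    x_new = x - step * grad(x) + noise

    def logq(a, b):  # log-density of proposing a from b (up to a constant)
        d = a - b + step * grad(b)
        return -np.dot(d, d) / (4.0 * step)

    log_alpha = (energy(x) - energy(x_new)) + logq(x, x_new) - logq(x_new, x)
    return x_new if np.log(rng.random()) < log_alpha else x
```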
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and a comparison of Paracousti-UQ results with MC simulations using simple models.
NASA Astrophysics Data System (ADS)
Yonezawa, Yasushige; Shimoyama, Hiromitsu; Nakamura, Haruki
2011-01-01
Multicanonical molecular dynamics (McMD) simulation and metadynamics (MetaD) are useful for obtaining free energies, and can be mutually complementary. We combined McMD with MetaD and applied the combination to conformational free energy calculations of a proline dipeptide. First, MetaD was performed along the dihedral angle at the prolyl bond, and we obtained a coarse biasing potential. After adding the biasing potential to the dihedral angle potential energy, we conducted McMD with the modified potential energy. Enhanced sampling was achieved for all degrees of freedom, and the sampling of the dihedral angle space was facilitated. After reweighting, we obtained an accurate free energy landscape.
Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object
NASA Astrophysics Data System (ADS)
Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain
2011-03-01
Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of treatment planning systems in radiotherapy; this method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on a Varian Eclipse treatment planning system. Large dose variations were observed, particularly in air- and bone-equivalent materials. In the current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the Varian Eclipse Anisotropic Analytical Algorithm was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code, version 2003. To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using pelvic DTOs and was performed for low- and high-energy photon beams (6 and 18 MV) on a 2100CD Varian linear accelerator. A square field (10×10 cm²) was used. Assuming the MC data as reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm while the dose difference was set to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium; however, the TPS computes dose to water. We used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show strong consistency between Eclipse and MC calculations on the beam axis.
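For readers unfamiliar with the χ/γ family of comparisons, the sketch below implements the closely related 1-D gamma index with the same 5%/7 mm criteria (an illustrative reimplementation, not the code used in the study; the χ index is a signed variant of the same dose-difference/distance-to-agreement test).

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dta=7.0, dd=0.05):
    """1-D gamma comparison of two dose profiles on a common axis x [mm].

    dta: distance-to-agreement criterion [mm]; dd: dose-difference criterion
    as a fraction of the reference maximum. gamma <= 1 means the point passes.
    """
    d_norm = dd * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2
        diff2 = ((dose_eval - di) / d_norm) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + diff2))
    return gamma

# e.g. pass rate: gamma = gamma_index_1d(d_mc, d_tps, x)
#                 pass_rate = (gamma <= 1.0).mean()
```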
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams
NASA Astrophysics Data System (ADS)
Popescu, I. A.; Shaw, C. P.; Zavgorodni, S. F.; Beckham, W. A.
2005-07-01
Monte Carlo (MC) simulations have traditionally been used for single-field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple-field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
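In practice, the calibration reduces to a single scaling from dose-per-history to absolute dose. A minimal sketch using the paper's calibration constant follows; the function name and usage are ours, not part of the BEAMnrc/DOSXYZnrc toolchain.

```python
def absolute_dose(dose_per_history, mu, electrons_per_mu=8.129e13):
    """Scale an MC dose array normalized per incident electron (Gy/history)
    to absolute dose in Gy for the delivered monitor units.

    electrons_per_mu: calibration constant of the virtual linac
    (8.129e13 electrons/MU for the 6 MV beam quoted above).
    """
    return dose_per_history * electrons_per_mu * mu

# e.g. total dose of a two-field plan with 150 MU and 120 MU per field:
# d_total = absolute_dose(d_field1, 150) + absolute_dose(d_field2, 120)
```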
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebe, J; Department of Physics and Astronomy, University of Calgary, Calgary, AB; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre, the Novalis Classic linear accelerator (linac) is used for stereotactic radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open-source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm² to 98×98 mm² (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
An energy function for dynamics simulations of polypeptides in torsion angle space
NASA Astrophysics Data System (ADS)
Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.
1998-05-01
Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high-frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model, only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. Also, the same temperature dependence of the stability and build-up of α-helices of 18-alanine as found in MD simulations is observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of MD trajectories are also faithfully reproduced. Finally, it is demonstrated that even for high-temperature unfolded polypeptides the MC simulation is more efficient than conventional MD simulation by a factor of 10.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hueso-Gonzalez, F; Vijande, J; Ballester, F
Purpose: Tissue heterogeneities and calcifications have a significant impact on the dosimetry of low-energy brachytherapy (BT). RayStretch is an analytical algorithm developed in our institution to incorporate heterogeneity corrections in LDR prostate brachytherapy. The aim of this work is to study its application in clinical cases by comparing its predictions with the results obtained with TG-43 and Monte Carlo (MC) simulations. Methods: A clinical implant (71 I-125 seeds, 15 needles) from a real patient was considered. On this patient, different volumes with calcifications were considered. Their properties were evaluated in three ways: i) by the treatment planning system (TPS) (TG-43), ii) by an MC study using the Penelope2009 code, and iii) by RayStretch. To analyze the performance of RayStretch, calcifications located in the prostate lobules covering 11% of the total prostate volume, and larger calcifications located in the lobules and underneath the urethra for a total occupied volume of 30%, were considered. Three mass densities (1.05, 1.20, and 1.35 g/cm³) were explored for the calcifications. Therefore, 6 different scenarios ranging from small low-density calcifications to large high-density ones have been discussed. Results: DVH and D90 results given by RayStretch agree within 1% with the full MC simulations. Although no effort has been made to improve RayStretch's numerical performance, its present implementation is able to evaluate a clinical implant in a few seconds to the same level of accuracy as a detailed MC calculation. Conclusion: RayStretch is a robust method for heterogeneity corrections in prostate BT supported on TG-43 data. Its compatibility with commercial TPSs and its high calculation speed make it feasible for use in clinical settings for improving treatment quality. It will allow, in a second phase of this project, its use during intraoperative ultrasound planning. This study was partly supported by a fellowship grant from the Spanish Ministry of Education, by the Generalitat Valenciana under Project PROMETEOII/2013/010, by the Spanish Government under Project No. FIS2013-42156, and by the European Commission within the Seventh Framework Program through ENTERVISION (grant agreement number 264552).
Campbell, Bruce G.; Landmeyer, James E.
2014-01-01
Chesterfield County is located in the northeastern part of South Carolina along the southern border of North Carolina and is primarily underlain by unconsolidated sediments of Late Cretaceous age and younger of the Atlantic Coastal Plain. Approximately 20 percent of Chesterfield County is in the Piedmont Physiographic Province, and this area of the county is not included in this study. These Atlantic Coastal Plain sediments compose two productive aquifers: the Crouch Branch aquifer that is present at land surface across most of the county and the deeper, semi-confined McQueen Branch aquifer. Most of the potable water supplied to residents of Chesterfield County is produced from the Crouch Branch and McQueen Branch aquifers by a well field located near McBee, South Carolina, in the southwestern part of the county. Overall, groundwater availability is good to very good in most of Chesterfield County, especially the area around and to the south of McBee, South Carolina. The eastern part of Chesterfield County does not have as abundant groundwater resources but resources are generally adequate for domestic purposes. The primary purpose of this study was to determine groundwater-flow rates, flow directions, and changes in water budgets over time for the Crouch Branch and McQueen Branch aquifers in the Chesterfield County area. This goal was accomplished by using the U.S. Geological Survey finite-difference MODFLOW groundwater-flow code to construct and calibrate a groundwater-flow model of the Atlantic Coastal Plain of Chesterfield County. The model was created with a uniform grid size of 300 by 300 feet to facilitate a more accurate simulation of groundwater-surface-water interactions. The model consists of 617 rows from north to south extending about 35 miles and 884 columns from west to east extending about 50 miles, yielding a total area of about 1,750 square miles. However, the active part of the modeled area, or the part where groundwater flow is simulated, totaled about 1,117 square miles. Major types of data used as input to the model included groundwater levels, groundwater-use data, and hydrostratigraphic data, along with estimates and measurements of stream base flows made specifically for this study. The groundwater-flow model was calibrated to groundwater-level and stream base-flow conditions from 1900 to 2012 using 39 stress periods. The model was calibrated with an automated parameter-estimation approach using the computer program PEST, and the model used regularized inversion and pilot points. The groundwater-flow model was calibrated using field data that included groundwater levels that had been collected between 1940 and 2012 from 239 wells and base-flow measurements from 44 locations distributed within the study area. To better understand recharge and inter-aquifer interactions, seven wells were equipped with continuous groundwater-level recording equipment during the course of the study, between 2008 and 2012. These water levels were included in the model calibration process. The observed groundwater levels were compared to the simulated ones, and acceptable calibration fits were achieved. Root mean square error for the simulated groundwater levels compared to all observed groundwater levels was 9.3 feet for the Crouch Branch aquifer and 8.6 feet for the McQueen Branch aquifer. The calibrated groundwater-flow model was then used to calculate groundwater budgets for the entire study area and for two sub-areas. 
The sub-areas are the Alligator Rural Water and Sewer Company well field near McBee, South Carolina, and the Carolina Sandhills National Wildlife Refuge acquisition boundary area. For the overall model area, recharge rates vary from 56 to 1,679 million gallons per day (Mgal/d) with a mean of 737 Mgal/d over the simulation period (1900–2012). The simulated water budget for the streams and rivers varies from 653 to 1,127 Mgal/d with a mean of 944 Mgal/d. The simulated "storage-in" term ranges from 0 to 565 Mgal/d with a mean of 276 Mgal/d. The simulated "storage-out" term has a range of 0 to 552 Mgal/d with a mean of 77 Mgal/d. Groundwater budgets for the McBee, South Carolina, area and the Carolina Sandhills National Wildlife Refuge acquisition area had similar results. An analysis of the effects of past and current groundwater withdrawals on base flows in the McBee area indicated a negligible effect of pumping from the Alligator Rural Water and Sewer well field on local stream base flows. Simulated base flows for 2012 for selected streams in and around the McBee area were similar with and without simulated groundwater withdrawals from the well field. Removing all pumping from the model for the entire simulation period (1900–2012) produces a negligible difference in increased base flow for the selected streams. The 2012 flow for Lower Alligator Creek was 5.04 Mgal/d with the wells pumping and 5.08 Mgal/d without the wells pumping; this represents the largest difference in simulated flows for the six streams.
Estimation of finite mixtures using the empirical characteristic function
NASA Technical Reports Server (NTRS)
Anderson, C.; Boullion, T.
1985-01-01
A problem which occurs in analyzing LANDSAT scenes is that of separating the components of a finite mixture of several distinct probability distributions. A review of the literature indicates this is a problem which occurs in many disciplines, such as engineering, biology, physiology, and economics. Many approaches to this problem have appeared in the literature; however, most are very restrictive in their assumptions or have met with only a limited degree of success when applied to realistic situations. A procedure is investigated which combines the k-L procedure of Feuerverger and McDunnough (1981) with the MAICE procedure of Akaike (1974). The feasibility of this approach is being investigated numerically via the development of a computer software package enabling a simulation study and comparison with other procedures.
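The empirical characteristic function at the heart of the k-L procedure is simple to compute; a small sketch follows (an illustration only, not the software package mentioned above).

```python
import numpy as np

def ecf(t, x):
    """Empirical characteristic function phi_n(t) = (1/n) sum_j exp(i t x_j),
    evaluated at one or more frequencies t for a sample x."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    x = np.asarray(x, dtype=float)
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

# k-L-type fitting then minimizes a weighted distance between ecf(t, data)
# and the model characteristic function of the mixture over a grid of t.
```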
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
NASA Astrophysics Data System (ADS)
Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh
2016-09-01
Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, then the flyrock distance can be controlled and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses; after that, using the developed MR model, the flyrock phenomenon was simulated by the Monte Carlo (MC) approach. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and some controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of the MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of the simulated flyrock by MC was obtained as 236.3 m, while this value was 238.6 m for the measured one. Furthermore, a sensitivity analysis was also conducted to investigate the effects of model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. Note that the proposed MR and MC models should be utilized only in the studied area, and their direct use in other conditions is not recommended.
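The MC step amounts to pushing sampled blast-design inputs through the fitted MR equation. The sketch below illustrates the idea with placeholder input distributions and regression coefficients; the paper's fitted regression and site statistics are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Illustrative input distributions (means/spreads are hypothetical)
powder_factor = rng.normal(0.65, 0.10, n)   # kg/m^3
burden        = rng.normal(2.50, 0.30, n)   # m
stemming      = rng.normal(2.00, 0.25, n)   # m

def mr_flyrock(pf, b, st):
    """Hypothetical linear MR model: the coefficients are placeholders,
    not the regression fitted from the 62 Ulu Tiram blasts."""
    return 120.0 + 210.0 * pf - 15.0 * b - 12.0 * st

flyrock = mr_flyrock(powder_factor, burden, stemming)
print(flyrock.mean(), np.percentile(flyrock, 95))  # mean, 95th percentile
```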
D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc
2011-12-01
Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve dose conformity and increase treatment quality. The method was based on dose kernels precalculated in full patient geometries with MC simulations and the Geant4 toolkit, each kernel representing the dose distribution of a brachytherapy source at a single dwell position. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to fully account for the heterogeneities in dose optimization. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source than for iridium-192. For the latter, an underdosage of 5.1% was found for the dose received by 90% of the clinical target volume. A new method to optimize afterloading brachytherapy plans using MC dosimetric information was thus developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to account for the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. Copyright © 2011 Elsevier Inc. All rights reserved.
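A schematic of inverse planning by simulated annealing over dwell times with precomputed MC dose kernels; the kernel matrix here is random placeholder data rather than Geant4 output, and the objective and cooling schedule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_dwell = 200, 10

# Placeholder for the MC-precalculated dose kernels: column j holds the dose
# per unit dwell time delivered to every voxel by dwell position j.
K = rng.random((n_vox, n_dwell))
prescription = np.full(n_vox, 1.0)            # target dose per voxel

def cost(t):
    # Quadratic mismatch between delivered and prescribed dose.
    return np.sum((K @ t - prescription) ** 2)

t = np.full(n_dwell, 0.1)                     # initial dwell times
T = 1.0                                       # annealing "temperature"
for step in range(20_000):
    t_new = np.clip(t + rng.normal(0, 0.01, n_dwell), 0, None)
    dE = cost(t_new) - cost(t)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        t = t_new                             # accept the trial move
    T *= 0.9995                               # geometric cooling schedule

print("optimized dwell times:", np.round(t, 3))
```

Because the kernels already encode scatter and heterogeneity, the optimizer only reweights dwell times; no MC transport is run inside the annealing loop.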
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan
2011-02-01
This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and scattering coefficient μs) of cylindrical tissue, such as a cervix, from frequency-domain measurements of near-infrared diffuse light. The frequency-domain information (amplitude and phase) is extracted from time-domain MC with a modified method. To shorten the computation time in reconstructing the optical properties, an efficient and fast forward MC has to be achieved. To this end, databases of the frequency-domain information over a range of μa and μs were first pre-built by combining MC simulation with the Lambert-Beer law. A double polynomial model was then adopted to quickly obtain the frequency-domain information at any optical properties. Based on this fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstructions from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in reconstructing μa and μs are less than ±6% and ±12%, respectively, when the other coefficient (μs or μa) is held fixed. When both μa and μs are unknown, the relative errors in reconstructing the reduced scattering coefficient and absorption coefficient are mainly less than ±10% in the ranges 45 < μs < 80 cm⁻¹ and 0.25 < μa < 0.55 cm⁻¹. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 s. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that a relative error of less than 20% can be achieved.
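A sketch of the database-plus-polynomial surrogate and its use in a nonlinear fit, with made-up smooth functions standing in for the precomputed MC tables; the polynomial basis and grid ranges are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder "database": frequency-domain amplitude and phase precomputed by
# MC over a grid of optical properties (a made-up smooth function stands in
# for the actual MC table here).
mua_grid = np.linspace(0.25, 0.55, 7)         # cm^-1
mus_grid = np.linspace(45, 80, 8)             # cm^-1
MUA, MUS = np.meshgrid(mua_grid, mus_grid, indexing="ij")
amp   = np.exp(-2.0 * MUA) / np.sqrt(MUS)     # illustrative stand-in
phase = 0.05 * MUS / (1.0 + MUA)              # illustrative stand-in

# Polynomial model: fit each observable as a 2D polynomial in (mua, mus) so
# the forward model is evaluable at any optical properties without new MC.
def design(mua, mus):
    return np.stack([np.ones_like(mua), mua, mus, mua*mus, mua**2, mus**2], -1)

A = design(MUA.ravel(), MUS.ravel())
c_amp, *_   = np.linalg.lstsq(A, amp.ravel(), rcond=None)
c_phase, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)

def forward(p):                               # fast surrogate forward model
    d = design(np.atleast_1d(p[0]), np.atleast_1d(p[1]))
    return np.array([d @ c_amp, d @ c_phase]).ravel()

measured = forward([0.4, 60.0])               # synthetic "measurement"
fit = least_squares(lambda p: forward(p) - measured, x0=[0.3, 50.0],
                    bounds=([0.25, 45], [0.55, 80]))
print("reconstructed (mua, mus):", fit.x)
```

The expensive MC runs are done once, offline; the optimizer then iterates only on the cheap polynomial surrogate, which is what makes sub-second reconstruction plausible.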
Drost, B.W.; Ely, D.M.; Lum, W. E.
1999-01-01
The demand for water in Thurston County has increased steadily in recent years because of rapid growth in population. Surface-water resources in the county have been fully appropriated for many years, and Thurston County now relies entirely on ground water for new supplies of water. Thurston County is underlain by up to 2,000 feet of unconsolidated glacial and non-glacial Quaternary sediments that overlie consolidated rocks of Tertiary age. Six geohydrologic units have been identified within the unconsolidated sediments. Between 1988 and 1990, median water levels rose 0.6 to 1.9 feet in all geohydrologic units except bedrock, in which they declined 1.4 feet. Greater wet-season precipitation in 1990 (43 inches) than in 1988 (26 inches) was the probable cause of the higher 1990 water levels. Ground-water flow in the unconsolidated sediments underlying Thurston County was simulated with a computerized numerical model (MODFLOW). The model was constructed to simulate 1988 ground-water conditions as steady state. Simulated inflow to the model area from precipitation and secondary recharge was 620,000 acre-feet per year (93 percent), leakage from streams and lakes was 38,000 acre-ft/yr (6 percent), and ground water entering the model along the Chehalis River valley was 5,800 acre-ft/yr (1 percent). Simulated outflow from the model was primarily leakage to streams, springs, lakes, and seepage faces (500,000 acre-ft/yr, or 75 percent of the total outflow). Submarine seepage to Puget Sound was simulated to be 88,000 acre-ft/yr (13 percent). Simulated ground-water discharge along the Chehalis River valley was 12,000 acre-ft/yr (2 percent). Simulated withdrawals by wells for all purposes were 62,000 acre-ft/yr (9 percent). The numerical model was used to simulate the possible effects of increasing ground-water withdrawals by 23,000 acre-ft/yr above the 1988 rate of withdrawal. The model indicated that the increased withdrawals would come from reduced discharge to springs, seepage faces, and offshore (a total of 51 percent of increased pumping) and decreased flow to rivers (46 percent). About 3 percent would come from increased leakage from rivers. Water levels would decline more than 1 foot over most of the model area, more than 10 feet over some areas, and by a maximum of about 35 feet. Contributing areas for water discharging at McAllister and Abbott Springs and to pumping centers near Tumwater and Lacey were estimated using a particle-tracking post-processing computer code (MODPATH) and a MODFLOW model calibrated to steady-state (1988) conditions. Water discharging at McAllister and Abbott Springs was determined to come from water entering the ground-water system at the water table in an area of about 20 square miles (mi²) to the west and south of the springs. This water is estimated to come from recharge (both precipitation and secondary) and from leakage from Lake St. Clair and several other surface-water bodies. Southeast of Lacey, about 3,800 acre-ft of ground water were pumped from five municipal wells during 1988. The source of the pumped water was determined to be an area that covers about 1.1 mi². The water was estimated to come from recharge (both precipitation and secondary) and leakage from surface-water bodies. Along the lower Deschutes River, nearly 3,900 acre-ft/yr of ground water were pumped during 1988 from 15 wells for municipal and industrial use. The calculated source of this water was an area that covers about 1.3 mi².
Within the calculated contributing area the pumped ground water comes from recharge (both precipitation and secondary) and leakage from the Deschutes River and several other surface-water bodies.
kmos: A lattice kinetic Monte Carlo framework
NASA Astrophysics Data System (ADS)
Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten
2014-07-01
Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems where the site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a runtime performance that is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects, kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or via a graphical user interface, which visualizes the model geometry, the lattice occupations, and the rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
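For orientation, a minimal rejection-free lattice kMC loop (Gillespie/BKL style) for a toy adsorption-desorption model; this illustrates the algorithm class kmos implements, not the kmos API itself, and the rate constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 20                                        # lattice of L*L sites
lattice = np.zeros((L, L), dtype=int)         # 0 = empty, 1 = occupied
k_ads, k_des = 1.0, 0.5                       # rate constants (illustrative)

t = 0.0
for _ in range(20_000):
    empty = np.argwhere(lattice == 0)
    occ   = np.argwhere(lattice == 1)
    # Rate catalogue: adsorption on every empty site, desorption on every
    # occupied site; the total rate sets the stochastic time increment.
    rates = np.concatenate([np.full(len(empty), k_ads),
                            np.full(len(occ),   k_des)])
    R = rates.sum()
    # Select one event with probability proportional to its rate.
    i = rng.choice(len(rates), p=rates / R)
    if i < len(empty):
        lattice[tuple(empty[i])] = 1
    else:
        lattice[tuple(occ[i - len(empty)])] = 0
    t += -np.log(rng.random()) / R            # advance the kMC clock

print(f"coverage after t = {t:.2f}: {lattice.mean():.3f}")
print("expected steady-state coverage:", k_ads / (k_ads + k_des))
```

The naive rebuild of the event list each step is what kmos avoids by generating model-specific code for local updates, which is why its runtime is essentially independent of lattice size.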
SUPERNOVA DRIVING. I. THE ORIGIN OF MOLECULAR CLOUD TURBULENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padoan, Paolo; Pan, Liubin; Haugbølle, Troels
2016-05-01
Turbulence is ubiquitous in molecular clouds (MCs), but its origin is still unclear because MCs are usually assumed to live longer than the turbulence dissipation time. Interstellar medium (ISM) turbulence is likely driven by supernova (SN) explosions, but it has never been demonstrated that SN explosions can establish and maintain a turbulent cascade inside MCs consistent with the observations. In this work, we carry out a simulation of SN-driven turbulence in a volume of (250 pc)^3, specifically designed to test if SN driving alone can be responsible for the observed turbulence inside MCs. We find that SN driving establishes a velocity scaling consistent with the usual scaling laws of supersonic turbulence, suggesting that previous idealized simulations of MC turbulence, driven with a random, large-scale volume force, were correctly adopted as appropriate models for MC turbulence, despite the artificial driving. We also find that the same scaling laws extend to the interiors of MCs, and that the velocity-size relation of the MCs selected from our simulation is consistent with that of MCs from the Outer Galaxy Survey, the largest MC sample available. The mass-size relation and the mass and size probability distributions also compare successfully with those of the Outer Galaxy Survey. Finally, we show that MC turbulence is super-Alfvénic with respect to both the mean and rms magnetic-field strength. We conclude that MC structure and dynamics are the natural result of SN-driven turbulence.
Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin
Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul
2014-01-01
The Edisto River is the longest and largest river system completely contained in South Carolina and is one of the longest free-flowing blackwater rivers in the United States. The Edisto River basin also has fish-tissue mercury concentrations that are among the highest recorded in the United States. As part of an effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River basin, analyses and simulations of the hydrology of the Edisto River basin were made with the topography-based hydrological model (TOPMODEL). The potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River basin, was assessed. Scaling up was done in a stepwise process, beginning with applying the calibration parameters, meteorological data, and topographic wetness index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made in subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River basin and updated values for some of the TOPMODEL calibration parameters. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the two models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the significant difference in drainage-area size at the outlet locations of the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek basin were used in the water-quality load models.
Amoush, Ahmad; Wilkinson, Douglas A.
2015-01-01
This work is a comparative study of the dosimetry calculated by Plaque Simulator, a treatment planning system for eye plaque brachytherapy, with the dosimetry calculated using Monte Carlo simulation for an Eye Physics model EP917 eye plaque. Monte Carlo (MC) simulation using MCNPX 2.7 was used to calculate the central axis dose in water for an EP917 eye plaque fully loaded with 17 IsoAid Advantage 125I seeds. In addition, the dosimetry parameters Λ, gL(r), and F(r,θ) were calculated for the IsoAid Advantage model IAI-125 125I seed and benchmarked against published data. Bebig Plaque Simulator (PS) v5.74 was used to calculate the central axis dose based on the AAPM Updated Task Group 43 (TG-43U1) dose formalism. The calculated central axis doses from MC and PS were then compared. When the MC dosimetry parameters for the IsoAid Advantage 125I seed were compared with the consensus values, Λ agreed with the consensus value to within 2.3%. However, much larger differences were found between the MC-calculated gL(r) and F(r,θ) and the consensus values. The differences between the MC-calculated dosimetry parameters and recently published data are much smaller. The differences between the calculated central axis absolute dose from MC and PS ranged from 5% to 10% for distances between 1 and 12 mm from the outer scleral surface. When the dosimetry parameters for the 125I seed from this study were used in PS, the calculated absolute central axis dose differences were reduced by 2.3% at depths of 4 to 12 mm from the outer scleral surface. We conclude that PS adequately models the central dose profile of this plaque using its defaults for the IsoAid model IAI-125 at distances of 1 to 7 mm from the outer scleral surface. However, improved dose accuracy can be obtained by using updated dosimetry parameters for the IsoAid model IAI-125 125I seed. PACS number: 87.55.K‐ PMID:26699577
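In its 1D point-source approximation, the TG-43 formalism referenced above reduces to D(r) = S_K Λ (r0/r)² g(r) φ_an(r) with r0 = 1 cm. A minimal sketch with placeholder table values (the actual consensus data for the IsoAid IAI-125 seed are in the cited literature, and real evaluations also use the 2D geometry function and anisotropy table):

```python
import numpy as np

# TG-43 point-source (1D) approximation:
#   D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an(r),  r0 = 1 cm
S_K    = 1.0         # air-kerma strength, U (illustrative)
Lambda = 0.981       # dose-rate constant, cGy/(h*U) -- placeholder value
r0     = 1.0         # reference distance, cm

# Placeholder radial dose function g(r) sampled at a few radii (cm); the
# consensus table for the given 125I seed model would be used in practice.
r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])
g_tab = np.array([1.05, 1.00, 0.83, 0.67, 0.40, 0.24])

def dose_rate(r, phi_an=1.0):
    g = np.interp(r, r_tab, g_tab)   # simple linear interpolation of g(r)
    return S_K * Lambda * (r0 / r) ** 2 * g * phi_an

for r in (1.0, 2.0, 5.0):
    print(f"r = {r} cm: {dose_rate(r):.4f} cGy/h")
```

The comparison in the abstract amounts to swapping the g(r) and F(r,θ) tables fed into exactly this kind of evaluation, which is why updating the seed parameters changes the PS central-axis dose.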
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y; Cai, J; Meltsner, S
2016-06-15
Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict because of the larger interior ring lumen. Some studies showed that the source could be several millimeters from the planned positions, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step size) was simulated as the summation of 10 individual sources with equal dwell times for simplification. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, and ±0.5 cm were then simulated, distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. With seventy simulations, 10^8 photon histories gave statistical uncertainties (k=1) < 2% for (0.1 cm)^3 voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for distal global shifts, and 0.4%, 2.8%, and 5.1% higher for proximal global shifts, respectively. The MC Point A doses differed by < 1% when compared to BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cawkwell, Marc Jon
2016-09-09
The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)], plus electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are (i) the computation of the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, (ii) the computation of tensors of elastic constants and compliances via Parrinello and Rahman's fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and (iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires the LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
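A generic sketch of the Metropolis acceptance rule for volume moves in the isothermal-isobaric ensemble that codes of this kind employ; the toy energy function below is a placeholder, not the Williams pair potentials used by MC3, and the reduced units are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
kB_T = 1.0          # reduced units (illustrative)
P    = 1.0          # external pressure
N    = 64           # number of rigid molecules

def potential_energy(V):
    # Placeholder for the intermolecular energy of the crystal at volume V;
    # a real code sums atom-centered pair potentials and point charges.
    return 100.0 / V - 20.0 * np.log(V)

V = 50.0
for step in range(50_000):
    V_new = V + rng.normal(0.0, 0.5)          # trial volume move
    if V_new <= 0:
        continue
    dU = potential_energy(V_new) - potential_energy(V)
    # NPT Metropolis criterion:
    # accept with prob min(1, exp(-[dU + P*dV]/kT + N*ln(V'/V))).
    arg = -(dU + P * (V_new - V)) / kB_T + N * np.log(V_new / V)
    if np.log(rng.random()) < arg:
        V = V_new

print(f"equilibrated volume estimate: {V:.2f}")
```

Sampling cell volume (and, in a full code, cell shape) this way is what gives access to thermal expansion and, via the fluctuation formula, the elastic tensors.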
NASA Astrophysics Data System (ADS)
Chen, Zhe; Kecskes, Laszlo J.; Zhu, Kaigui; Wei, Qiuming
2016-12-01
The uniaxial tensile properties of monocrystalline tungsten (MC-W) and nanocrystalline tungsten (NC-W) with embedded hydrogen and helium atoms have been investigated using molecular dynamics (MD) simulations in the context of radiation damage evolution. Different strain rates were imposed to investigate the strain rate sensitivity (SRS) of the samples. Results show that the plastic deformation of MC-W and NC-W is dominated by different mechanisms: dislocation-based activity for MC-W and grain boundary (GB)-based activity for NC-W. For MC-W, the SRS increases and a transition appears in the deformation mechanism with increasing embedded-atom concentration. However, no obvious embedded-atom concentration dependence of the SRS has been observed for NC-W. Instead, in the latter case, the embedded atoms facilitate GB sliding and intergranular fracture. Additionally, strong strain-enhanced He cluster growth has been observed. The corresponding underlying mechanisms are discussed.
Application of MC1 to Wind Cave National Park: Lessons from a small-scale study: Chapter 8
King, David A.; Bachelet, Dominique M.; Symstad, Amy J.
2015-01-01
MC1 was designed for application to large regions that include a wide range of elevation and topography, thereby encompassing a broad range of climates and vegetation types. The authors applied the dynamic global vegetation model MC1 to Wind Cave National Park (WCNP) in the southern Black Hills of South Dakota, USA, on the ecotone between ponderosa pine forest to the northwest and mixed-grass prairie to the southeast. They calibrated MC1 to simulate fire effects adequately in the warmer southeastern parts of the park, maintaining grasslands there while allowing forests to grow in the northwest, and then simulated future vegetation with climate projections from three general circulation models (GCMs). The results suggest that fire frequency, as affected by climate and/or human intervention, may be more important than the direct effects of climate in determining the distribution of ponderosa pine in the Black Hills region, both historically and in the future.
The first super geomagnetic storm of solar cycle 24: "The St. Patrick day (17 March 2015)" event
NASA Astrophysics Data System (ADS)
Wu, C. C.; Liou, K.; Socker, D. G.; Howard, R.; Jackson, B. V.; Yu, H. S.; Hutting, L.; Plunkett, S. P.
2015-12-01
The first super geomagnetic storm of solar cycle 24 occurred on St. Patrick's Day (17 March 2015). Notably, it was a two-step storm. The source of the storm can be traced back to the solar event of 15 March 2015. At ~02:10 UT on that day, SOHO/LASCO C3 recorded a partial-halo coronal mass ejection (CME), which was associated with a C9.1/1F flare (S22W25) and a series of type II/IV radio bursts. The propagation speed of this CME is estimated to be ~668 km/s during 02:10-06:20 UT (Figure 1). An interplanetary (IP) shock, likely driven by the CME, arrived at the Wind spacecraft at 03:59 UT on 17 March (Figure 2). The arrival of the IP shock at the Earth may have caused a sudden storm commencement (SSC) at 04:45 UT on 17 March. The storm intensified (Dst dropped to -80 nT at ~10:00 UT) during the crossing of the CME sheath. Later, the storm recovered slightly (Dst ~ -50 nT) after the IMF turned northward. At 11:01 UT, the IMF started turning southward again, due to the field of the large magnetic cloud (MC) itself, and caused the second storm intensification, reaching Dst = -228 nT on 18 March. We conclude that the St. Patrick's Day event is a two-step storm: the first step is associated with the sheath, whereas the second step is associated with the MC. Here, we employ a numerical simulation using the global, three-dimensional (3D), time-dependent, magnetohydrodynamic (MHD) model (H3DMHD; Wu et al. 2007) to study the CME propagation from the Sun to the Earth. The H3DMHD model has been modified so that it can be driven by solar wind data at the inner boundary of the computational domain. In this study, we use the time-varying, 3D solar wind velocity and density reconstructed from STELab (Japan) interplanetary scintillation (IPS) data by the University of California, San Diego, and the magnetic field at the IPS inner boundary provided by CSSS-model closed-loop propagation (Jackson et al., 2015). The simulation result matches the in situ solar wind plasma and field data at Wind well, in terms of the peak values of the IP shock and its arrival time (Figure 3). The simulation not only helps us to identify the driver of the IP shock, but also demonstrates that the modified H3DMHD model is capable of realistic simulations of large solar events. In this presentation, we will discuss the CME/storm event with detailed data from observations (Wind and SOHO) and our numerical simulation.
Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A
2014-01-01
The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2017-02-01
This report considers the development of a unified Monte Carlo (MC)-based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory, and helicity flip are observed, and a detailed comparison is presented and discussed.
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field-programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
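A sketch of the additive lagged Fibonacci recurrence x[n] = (x[n-j] + x[n-k]) mod 2^m underlying the ALFG; the lags and stream setup here are illustrative, not the ones evaluated in the study:

```python
import numpy as np

class ALFG:
    """Additive lagged Fibonacci generator x[n] = (x[n-j] + x[n-k]) mod 2**m.

    Illustrative lags (j, k) = (5, 17); production LPMC work uses much
    larger, carefully chosen lag pairs and one parameterized or leapfrogged
    stream per parallel replica so that streams do not overlap.
    """
    def __init__(self, seed, j=5, k=17, m=32):
        rng = np.random.default_rng(seed)          # seed the lag buffer
        self.state = list(rng.integers(0, 2**m, size=k, dtype=np.uint64))
        self.j, self.k, self.mask = j, k, (1 << m) - 1

    def next(self):
        x = (self.state[-self.j] + self.state[-self.k]) & self.mask
        self.state.append(x)                       # shift the lag buffer
        self.state.pop(0)
        return x

gen = ALFG(seed=123)
print([gen.next() for _ in range(5)])
```

The appeal on FPGA is that the recurrence needs only one adder and a short shift register per stream, so many independent generators fit on a single device.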
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, A; Wu, Q; Sawkey, D
Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated, used in the MC simulation, and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured dose at a depth of 1.5 cm for comparison with the simulation results. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3×10 cm^2 cut-out in a 15×15 cm^2 applicator for a delivery of a 90° arc. The resulting data provide qualitative and quantitative evidence that the simulation platform could be used as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model-based algorithms, allowing efficient optimization of trajectories and quality assurance of plans without the need for extensive measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popescu, Florentin; Sen, Cengiz; Dagotto, Elbio R
2007-01-01
The crossover between an impurity band (IB) and a valence band (VB) regime as a function of the magnetic impurity concentration in a model for diluted magnetic semiconductors (DMSs) is studied systematically by taking into consideration the Coulomb attraction between the carriers and the magnetic impurities. The density of states and the ferromagnetic transition temperature of a spin-fermion model applied to DMSs are evaluated using dynamical mean-field theory and Monte Carlo (MC) calculations. It is shown that the addition of a square-well-like attractive potential can generate an IB at small enough Mn doping x for values of the p-d exchange J that are not strong enough to generate one by themselves. We observe that the IB merges with the VB when x ≥ x_c, where x_c is a function of J and the Coulomb strength V. Using MC simulations, we demonstrate that the range of the Coulomb attraction plays an important role. While the on-site attraction, which has been used in previous numerical simulations, effectively renormalizes J for all values of x, an unphysical result, a nearest-neighbor-range attraction renormalizes J only at very low dopings, i.e., until the bound-hole wave functions start to overlap. Thus, our results indicate that the Coulomb attraction can be neglected to study Mn-doped GaSb, GaAs, and GaP in the relevant doping regimes, but it should be included in the case of Mn-doped GaN, which is expected to be in the IB regime.
NASA Astrophysics Data System (ADS)
de Vita, Ruggero; Trenti, Michele; MacLeod, Morgan
2018-04-01
Despite recent observational efforts, unequivocal signs of the presence of intermediate-mass black holes (IMBHs) in globular clusters (GCs) have not yet been found. Especially when the presence of IMBHs is constrained through dynamical modelling of stellar kinematics, it is fundamental to account for the displacement that the IMBH might have with respect to the GC centre. In this paper, we analyse the IMBH wandering around the stellar density centre using a set of realistic direct N-body simulations of star cluster evolution. Guided by the simulation results, we develop a basic yet accurate model that estimates the average IMBH radial displacement ⟨r_bh⟩ in terms of structural quantities such as the core radius (r_c), mass (M_c), and velocity dispersion (σ_c), in addition to the average stellar mass (m_c) and the IMBH mass (M_bh). The model can be expressed by the equation ⟨r_bh⟩/r_c = A (m_c/M_bh)^α [σ_c² r_c/(G M_c)]^β, in which the free parameters A, α, and β are calibrated against the numerical results for the IMBH displacement. The model is then applied to Galactic GCs, finding that for an IMBH mass equal to 0.1 per cent of the GC mass, the typical expected displacement of a putative IMBH is around 1 arcsec for most Galactic GCs, but IMBHs can wander to larger angular distances in some objects, including a predicted 2.5 arcsec displacement for NGC 5139 (ω Cen), and >10 arcsec for NGC 5053, NGC 6366, and Arp 2.
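A direct evaluation of the displacement model, with A, α, and β set to placeholder values; the actual fitted values come from the paper's N-body calibration and are not reproduced here, so the printed number is purely illustrative:

```python
# Evaluate <r_bh>/r_c = A * (m_c/M_bh)**alpha * (sigma_c**2 * r_c/(G*M_c))**beta
A, alpha, beta = 1.0, 0.5, 0.5               # hypothetical fit parameters

G       = 4.301e-3                            # pc (km/s)^2 / Msun
m_c     = 0.5                                 # mean stellar mass, Msun
M_bh    = 1.0e3                               # IMBH mass, Msun
M_c     = 1.0e5                               # core mass, Msun
r_c     = 1.0                                 # core radius, pc
sigma_c = 10.0                                # core velocity dispersion, km/s

r_bh = A * (m_c / M_bh)**alpha * (sigma_c**2 * r_c / (G * M_c))**beta * r_c
print(f"expected IMBH displacement: {r_bh:.3e} pc")
```

Dividing the result by the cluster distance converts it to the angular offset quoted for individual GCs.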
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulating diffuse reflectance and fluence rate distributions for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision floating-point arithmetic provides higher accuracy.
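For context, a minimal photon-packet MC of the kind being accelerated here, computing diffuse reflectance from a semi-infinite medium with isotropic scattering and implicit-capture weighting; real codes add a Henyey-Greenstein phase function, refractive-index mismatch, and Russian roulette, and the coefficients below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_a, mu_s = 0.1, 10.0                 # absorption/scattering coeffs, mm^-1
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
N = 5_000                              # photon packets

reflected = 0.0
for _ in range(N):
    z, uz, w = 0.0, 1.0, 1.0           # depth, direction cosine, packet weight
    while w > 1e-2:                    # crude cutoff (real codes use roulette)
        z += uz * (-np.log(rng.random()) / mu_t)   # sample free path length
        if z < 0.0:                    # packet crossed the surface: escapes
            reflected += w
            break
        w *= albedo                    # implicit capture of absorbed fraction
        uz = 2.0 * rng.random() - 1.0  # isotropic scattering direction

print(f"diffuse reflectance ≈ {reflected / N:.3f}")
```

Each packet's random walk is independent, which is exactly why the workload distributes so naturally over a P2P pool of GPUs.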
Multiple steady solutions in a driven cavity
NASA Astrophysics Data System (ADS)
Osman, Kahar; McHugh, John
2004-11-01
The symmetric driven cavity (Farias and McHugh, Phys. Fluids, 2002) in two and three dimensions is considered. Results are obtained via numerical computations of the Navier-Stokes equations, assuming constant density. The numerical algorithm is a splitting method, using finite differences. The forcing at the top is sinusoidal, and the forcing wavelength is allowed to vary in subsequent trials. The two-dimensional results with 2, 4, and 6 oscillations in the forcing show a subcritical bifurcation to an asymmetric solution, with the Reynolds number as the important parameter. The symmetric solution is found to have vortex flow with streamlines that conform to the boundary shape. The asymmetric solution has vortex flow with streamlines that are approximately circular near the vortex center. Two-dimensional results with 8 or more oscillations in the forcing show a supercritical bifurcation to an asymmetric solution. Three-dimensional simulations show that the length ratios play a critical role, and the depth of the cavity must be large compared to the height in order to achieve the same subcritical bifurcation as in two dimensions.
Mechem, David B.; Giangrande, Scott E.; Wittman, Carly S.; ...
2015-03-13
A case of shallow cumulus and precipitating cumulus congestus sampled at the Atmospheric Radiation Measurement (ARM) Program Southern Great Plains (SGP) supersite is analyzed using a multi-sensor observational approach and numerical simulation. Observations from a new radar suite surrounding the facility are used to characterize the evolving statistical behavior of the precipitating cloud system. This is accomplished using distributions of different measures of cloud geometry and precipitation properties. Large-eddy simulation (LES) with size-resolved (bin) microphysics is employed to determine the forcings most important in producing the salient aspects of the cloud system captured in the radar observations. Our emphasis is on assessing the importance of time-varying vs. steady-state large-scale forcing on the model's ability to reproduce the evolutionary behavior of the cloud system. Additional consideration is given to how the characteristic spatial scale and homogeneity of the forcing imposed on the simulation influences the evolution of cloud system properties. Results indicate that several new scanning radar estimates, such as distributions of cloud top, are useful to differentiate the value of time-varying (or at least temporally well-matched) forcing on LES solution fidelity.
The effect of linear spring number at side load of McPherson suspension in electric city car
NASA Astrophysics Data System (ADS)
Budi, Sigit Setijo; Suprihadi, Agus; Makhrojan, Agus; Ismail, Rifky; Jamari, J.
2017-01-01
The spring in a McPherson-type suspension controls vehicle stability and improves ride comfort, although it tends to develop side loads. The purpose of this study is to simulate the McPherson suspension spring of an electric city car using the finite element method (FEM) and to determine the side load that appears on the spring. The research was conducted in several stages: designing linear spring models with various numbers of coils, and then modeling the suspension spring in FEM software. The suspension spring is compressed in the vertical direction (z-axis), and the forces arising along the x-, y-, and z-axes at the upper part of the spring are evaluated to characterize the side load there. The FEM simulations show that the spring whose side-load components along the x- and y-axes are closest to zero is the most stable spring.
NASA Astrophysics Data System (ADS)
Ilyasov, Ildar K.; Prikhodko, Constantin V.; Nevorotin, Alexey J.
1995-01-01
A Monte Carlo (MC) simulation model and a thermoindicative tissue phantom were applied to evaluate the depth of tissue necrosis (DTN) resulting from quasi-cw copper vapor laser (578 nm) irradiation. It was shown that the focusing angle of the incident light is essential for the DTN. In particular, the DTN rose significantly as this angle was increased, up to +20° for the MC simulation model and +5° for the tissue phantom model, with no further increase in the necrosis depth above these angles. Notably, the relationship between focusing angle and DTN was apparently stronger for the real target than for the MC-derived hypothetical one. To what extent these data are applicable to medical practice can be evaluated in animal models simulating laser-assisted therapy for PWS or related dermatologic lesions with converged 578 nm laser beams.
BCA-kMC Hybrid Simulation for Hydrogen and Helium Implantation in Material under Plasma Irradiation
NASA Astrophysics Data System (ADS)
Kato, Shuichi; Ito, Atsushi; Sasao, Mamiko; Nakamura, Hiroaki; Wada, Motoi
2015-09-01
Ion implantation by plasma irradiation into materials achieves very high impurity concentrations. The high impurity concentration causes deformation and destruction of the material, phenomena peculiar to the plasma-material interaction (PMI). The injection of plasma particles is generally simulated using the binary collision approximation (BCA) or molecular dynamics (MD), while the diffusion of implanted atoms has traditionally been solved with the diffusion equation, in which the implanted atoms are replaced by a continuous concentration field. However, the diffusion equation has insufficient accuracy at low concentrations and at locally high concentrations, as in hydrogen blistering and helium bubble formation. This problem is overcome by kinetic Monte Carlo (kMC), which represents the diffusion of the implanted atoms as jumps between interstitial sites in the material. In this paper, we propose a new approach, ``BCA-kMC hybrid simulation,'' for hydrogen and helium implantation under plasma irradiation.
Magnetic Levitation of MC3T3 Osteoblast Cells as a Ground-Based Simulation of Microgravity
Kidder, Louis S.; Williams, Philip C.; Xu, Wayne Wenzhong
2009-01-01
Diamagnetic samples placed in a strong magnetic field and a magnetic field gradient experience a magnetic force. Stable magnetic levitation occurs when the magnetic force exactly counterbalances the gravitational force. Under this condition, a diamagnetic sample is in a simulated microgravity environment. The purpose of this study is to explore whether MC3T3-E1 osteoblastic cells can be grown in magnetically simulated hypo-g and hyper-g environments and to determine whether genes are differentially expressed under these conditions. The murine calvarial osteoblastic cell line MC3T3-E1, grown on Cytodex-3 beads, was subjected to a net gravitational force of 0, 1, and 2 g in a 17 T superconducting magnet for 2 days. Microarray analysis of these cells indicated that gravitational stress leads to up- and down-regulation of hundreds of genes. The methodology for sustaining long-term magnetic levitation of biological systems is discussed. PMID:20052306
Influence of Sub-grid-Scale Isentropic Transports on McRAS Evaluations using ARM-CART SCM Datasets
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.; Tao, W. K.
2004-01-01
In GCM-physics evaluations with the currently available ARM-CART SCM datasets, McRAS produced a very similar character of near-surface errors in simulated temperature and humidity, typically containing warm and moist biases near the surface and cold and dry biases aloft. We argued this must have a common cause, presumably rooted in the model physics. Lack of vertical adjustment of horizontal transport was thought to be a plausible source. Clearly, debarring such a freedom would force the incoming air to diffuse into the grid cell, which would naturally bias the surface air to become warm and moist while the upper air becomes cold and dry, a characteristic feature of the McRAS biases. Since the errors were significantly larger in the two winter cases, which contain potentially more intense episodes of cold and warm advective transports, this further reaffirmed our argument and provided additional motivation to introduce corrections. When the horizontal advective transports were suitably modified to allow rising and/or sinking following isentropic pathways of subgrid-scale motions, the outcome was to cool and dry (or warm and moisten) the lower (or upper) levels. Even crude approximations invoking such a correction reduced the temperature and humidity biases considerably. The tests were performed on all the available ARM-CART SCM cases with consistent outcomes. With the isentropic corrections implemented through two different numerical approximations, virtually identical benefits were derived, further confirming the robustness of our inferences. These results suggest the need for an isentropic advective transport adjustment in a GCM due to subgrid-scale motions.
The efficiency of close inbreeding to reduce genetic adaptation to captivity
Theodorou, K; Couvet, D
2015-01-01
Although ex situ conservation is indispensable for thousands of species, captive breeding is associated with negative genetic changes: loss of genetic variance and genetic adaptation to captivity that is deleterious in the wild. We used quantitative genetic individual-based simulations to model the effect of genetic management on the evolution of a quantitative trait and the associated fitness of wild-born individuals that are brought to captivity. We also examined the feasibility of the breeding strategies under a scenario of a large number of loci subject to deleterious mutations. We compared two breeding strategies: repeated half-sib mating and a method of minimizing mean coancestry (referred to as gc/mc). Our major finding was that half-sib mating is more effective in reducing genetic adaptation to captivity than the gc/mc method. Moreover, half-sib mating retains larger allelic and adaptive genetic variance. Relative to initial standing variation, the additive variance of the quantitative trait increased under half-sib mating during the sojourn in captivity. Although fragmentation into smaller populations improves the efficiency of the gc/mc method, half-sib mating still performs better in the scenarios tested. Half-sib mating shows two caveats that could mitigate its beneficial effects: low heterozygosity and high risk of extinction when populations are of low fecundity and size and one of the following conditions are met: (i) the strength of selection in captivity is comparable with that in the wild, (ii) deleterious mutations are numerous and only slightly deleterious. Experimental validation of half-sib mating is therefore needed for the advancement of captive breeding programs. PMID:25052417
SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
NASA Astrophysics Data System (ADS)
Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.
2014-03-01
Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08, we have added numerous features that improve speed, usability, and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy-ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy-ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. We have found that new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found at http://www.shieldhit.org.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard for the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation, with a speed-up comparable to a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
Novel Array-Based Target Identification for Synergistic Sensitization of Breast Cancer to Herceptin
2010-05-01
Tatsuya Azum, Eileen Adamson, Ryan Alipio, Becky Pio, Frank Jones, Dan Mercola. Chip-on-chip analysis of mechanism of action of HER2 inhibition in... Munawar, Kutbuddin S. Doctor, Michael Birrer, Michael McClelland, Eileen Adamson, Dan Mercola. Egr1 regulates the coordinated expression of numerous... Kemal Korkmaz, Mashide Ohmichi, Eileen Adamson, Michael McClelland, Dan Mercola. Identification of genes bound and regulated by ATF2/c-Jun.
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advances in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide the required system availability at minimum cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraint of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not lend themselves to closed-form expressions. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems, aiming to achieve the same end-goals as Reliability Centered Maintenance (RCM). RCM was first introduced to the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that prioritize functions based on criticality and influence are combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis work deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures for each combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach of representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common approach used in industry that leverages Continuous-Time Markov Chains (CTMCs). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
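A minimal sketch of the MC-plus-response-surface workflow described above, with a toy single-unit availability model standing in for the FADEC system simulation; the lifetime distribution, cost proxy, and availability threshold are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

def mc_availability(interval, n_runs=2000):
    # Stand-in MC model: availability of a unit under periodic inspection,
    # with Weibull (increasing-failure-rate) lifetimes; purely illustrative.
    life = rng.weibull(2.0, n_runs) * 100.0       # lifetimes, arbitrary units
    down = np.maximum(0.0, interval - life)       # undetected-failure exposure
    return 1.0 - np.mean(down) / interval

# 1) DOE: evaluate the MC model at a few design points (inspection intervals).
x = np.linspace(10.0, 120.0, 8)
y = np.array([mc_availability(xi) for xi in x])

# 2) RSE: regress the discrete MC outcomes as a quadratic response surface.
rse = np.poly1d(np.polyfit(x, y, 2))

# 3) Optimize a cost proxy subject to an availability constraint via the RSE.
cost = lambda t: 1000.0 / t[0]                    # inspection cost per time
res = minimize(cost, x0=[40.0], bounds=[(10.0, 120.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda t: rse(t[0]) - 0.95}])
print(f"optimal inspection interval ≈ {res.x[0]:.1f} "
      f"(RSE availability {rse(res.x[0]):.3f})")
```

The optimizer never calls the expensive MC model directly; it iterates on the fitted surrogate, which is the core efficiency argument of the proposed methodology.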
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Huang, Aiqun; Bhattacharya, Aniket; Binder, Kurt
2015-03-01
In this talk we compare results obtained from Monte Carlo (MC) and Brownian dynamics (BD) simulations for the universal properties of a semi-flexible chain. Specifically, we compare MC results obtained using the pruned-enriched Rosenbluth method (PERM) with those obtained from BD simulation. We find that the scaled plot of root-mean-square (RMS) end-to-end distance
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
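A sketch of sample size estimation with the asymptotic unconditional McNemar test, using the standard asymptotic formula (e.g., Connor, 1987); the example discordant proportions are hypothetical:

```python
from math import sqrt, ceil
from scipy.stats import norm

def n_mcnemar_unconditional(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for the asymptotic unconditional McNemar test;
    depends only on the two discordant cell proportions."""
    pd = p10 + p01                    # total discordant proportion
    d  = p10 - p01                    # difference to detect
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = (za * sqrt(pd) + zb * sqrt(pd - d * d)) ** 2 / d ** 2
    return ceil(n)

# Hypothetical 2x2 table of paired proportions: p10 = 0.20, p01 = 0.10.
print(n_mcnemar_unconditional(0.20, 0.10))   # -> 234 pairs here
```

Because the formula depends only on the discordant cells, specifying the full hypothesized 2 × 2 table (as the abstract describes) pins down everything the calculation needs.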
Ion-mediated interactions in suspensions of oppositely charged nanoparticles
NASA Astrophysics Data System (ADS)
Dahirel, Vincent; Hansen, Jean Pierre
2009-08-01
The structure of oppositely charged spherical nanoparticles (polyions), dispersed in ionic solutions with continuous solvent (primitive model), is investigated by Monte Carlo (MC) simulations, within explicit and implicit microion representations, over a range of polyion valences and densities, and microion concentrations. Systems with explicit microions are explored by semigrand canonical MC simulations, and allow density-dependent effective polyion pair potentials v_αβ^eff(r) to be extracted from measured partial pair distribution functions. Implicit microion MC simulations are based on pair potentials of mean force v_αβ^(2)(r) computed by explicit microion simulations of two charged polyions, in the low density limit. In the vicinity of the liquid-gas separation expected for oppositely charged polyions, the implicit microion representation leads to an instability against density fluctuations for polyion valences |Z| significantly below those at which the instability sets in within the exact explicit microion representation. Far from this instability region, the v_αβ^(2)(r) are found to be fairly close to but consistently more repulsive than the effective pair potentials v_αβ^eff(r). This is corroborated by additional calculations of three-body forces between polyion triplets, which are repulsive when one polyion is of opposite charge to the other two. The explicit microion MC data were exploited to determine the ratio of salt concentrations c and c_o within the dispersion and the reservoir (Donnan effect). c/c_o is found to first increase before finally decreasing as a function of the polyion packing fraction.
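The extraction step mentioned above has a simple low-density limit: the pair potential of mean force follows directly from the pair distribution function via v(r) = -k_B T ln g(r). A minimal sketch with a hypothetical g(r) (illustrative functional form, not the paper's data):

```python
import numpy as np

r = np.linspace(0.5, 5.0, 200)                  # center-to-center distance
g = 1.0 + 0.6 * np.exp(-r) * np.cos(3.0 * r)    # hypothetical measured g(r)
v_pmf = -np.log(np.clip(g, 1e-12, None))        # v(r) in units of kT
```

At finite polyion density this inversion is no longer exact, which is why the authors distinguish the density-dependent v_αβ^eff(r) from the two-body v_αβ^(2)(r).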
Estimation of cardiac conductivities in ventricular tissue by a variational approach
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Veneziani, Alessandro
2015-11-01
The bidomain model is the current standard model to simulate cardiac potential propagation. The numerical solution of this system of partial differential equations strongly depends on the model parameters, and in particular on the cardiac conductivities. Unfortunately, it is quite problematic to measure these parameters in vivo, and even more so in clinical practice, resulting in no common agreement in the literature. In this paper we consider a variational data assimilation approach to estimating those parameters. We consider the parameters as control variables to minimize the mismatch between the computed and the measured potentials under the constraint of the bidomain system. The existence of a minimizer of the misfit function is proved with the phenomenological Rogers-McCulloch ionic model, which completes the bidomain system. We significantly improve on the numerical approaches in the literature by resorting to a derivative-based optimization method, addressing some challenges due to discontinuity. The improvement in computational efficiency is confirmed by a 2D test as a direct comparison with approaches in the literature. The core of our numerical results is in 3D, on both idealized and real geometries, with the minimal ionic model. We demonstrate the reliability and the stability of the conductivity estimation approach in the presence of noise and with an imperfect knowledge of other model parameters.
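The structure of such a variational estimate (parameters as control variables, a misfit functional, a derivative-based optimizer) can be illustrated on a toy 1D diffusion problem. The explicit solver, misfit, and optimizer below are illustrative assumptions, far simpler than the bidomain setting:

```python
import numpy as np
from scipy.optimize import minimize

def simulate(sigma, u0, dx, dt, n_steps):
    """Explicit finite-difference solve of u_t = sigma * u_xx."""
    u = u0.copy()
    for _ in range(n_steps):
        u[1:-1] += sigma * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

nx, dx, dt, n_steps = 101, 0.01, 1e-5, 200
x = np.linspace(0.0, 1.0, nx)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)            # initial potential bump
u_meas = simulate(0.7, u0, dx, dt, n_steps)     # synthetic "measurement"

# Conductivity as the control variable; data mismatch as the objective.
misfit = lambda s: np.sum((simulate(s[0], u0, dx, dt, n_steps) - u_meas) ** 2)
res = minimize(misfit, x0=[0.3], method="L-BFGS-B", bounds=[(0.01, 3.0)])
print(res.x)   # recovers the true conductivity, ~0.7
```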
Structure and Stability of One-Dimensional Detonations in Ethylene-Air Mixtures
NASA Technical Reports Server (NTRS)
Yungster, S.; Radhakrishnan, K.; Perkins, Hugh D. (Technical Monitor)
2003-01-01
The propagation of one-dimensional detonations in ethylene-air mixtures is investigated numerically by solving the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing scheme and a point implicit, first-order-accurate, time marching algorithm. The ethylene-air combustion is modeled with a 20-species, 36-step reaction mechanism. A multi-level, dynamically adaptive grid is utilized, in order to resolve the structure of the detonation. Parametric studies over an equivalence ratio range of 0.5 < φ < 3 for different initial pressures and degrees of detonation overdrive demonstrate that the detonation is unstable for low degrees of overdrive, but the dynamics of wave propagation varies with fuel-air equivalence ratio. For equivalence ratios less than approximately 1.2 the detonation exhibits a short-period oscillatory mode, characterized by high-frequency, low-amplitude waves. Richer mixtures (φ > 1.2) exhibit a low-frequency mode that includes large fluctuations in the detonation wave speed; that is, a galloping propagation mode is established. At high degrees of overdrive, stable detonation wave propagation is obtained. A modified McVey-Toong short-period wave-interaction theory is in excellent agreement with the numerical simulations.
Simulation - McCandless, Bruce (Syncom IV)
1985-04-15
S85-30800 (14 April 1985) --- Astronaut Bruce McCandless II tests one of the possible methods of attempting to activate a switch on the Syncom-IV (LEASAT) satellite released April 13 into space from the Space Shuttle Discovery. The communications spacecraft failed to behave properly upon release and NASA officials and satellite experts are considering possible means of repair. McCandless was using a full scale mockup of the satellite in the Johnson Space Center's (JSC) mockup and integration laboratory.
NASA Astrophysics Data System (ADS)
El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.
2015-10-01
A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitter calculations in proton therapy. Prompt-γ rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, the estimation of the number and the energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10^5.
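The tally described above can be sketched compactly. The database shape, material/energy binning, and step loop below are illustrative assumptions about the implementation, not the authors' code:

```python
import numpy as np

n_e_bins, n_gamma_bins = 100, 250
# Offline database: expected prompt-gamma spectrum per unit track length,
# indexed by (material, proton-energy bin). Random numbers as placeholders.
gamma_yield = np.random.rand(2, n_e_bins, n_gamma_bins) * 1e-4

def tle_tally(steps, tally, e_bin_width=2.5):
    """steps: iterable of (material_id, energy_MeV, track_length_cm) MC steps."""
    for mat, e_mev, length in steps:
        e_bin = min(int(e_mev / e_bin_width), n_e_bins - 1)   # assumed binning
        # Deposit the *expected* spectrum along the step instead of waiting
        # for rare analog emissions -- this is the variance-reduction step.
        tally += length * gamma_yield[mat, e_bin]

tally = np.zeros(n_gamma_bins)
tle_tally([(0, 150.0, 0.05), (1, 149.2, 0.05)], tally)
```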
Development of a Multi-Channel Piezoelectric Acoustic Sensor Based on an Artificial Basilar Membrane
Jung, Youngdo; Kwak, Jun-Hyuk; Lee, Young Hwa; Kim, Wan Doo; Hur, Shin
2014-01-01
In this research, we have developed a multi-channel piezoelectric acoustic sensor (McPAS) that mimics the function of the natural basilar membrane, capable of separating incoming acoustic signals mechanically by their frequency and generating corresponding electrical signals. The McPAS operates without an external energy source or signal-processing unit, using a vibrating piezoelectric thin-film membrane. The shape of the vibrating membrane was chosen to be trapezoidal, such that different locations on the membrane have different local resonance frequencies. The length of the membrane is 28 mm and the width of the membrane varies from 1 mm to 8 mm. Multiphysics finite element analysis (FEA) was carried out to predict and design the mechanical behavior and piezoelectric response of the McPAS model. The designed McPAS was fabricated with a MEMS fabrication process based on the simulated results. The fabricated device was tested with a mouth simulator to measure its mechanical and piezoelectric frequency response with a laser Doppler vibrometer and acoustic signal analyzer. The experimental results show that the as-fabricated McPAS can successfully separate incoming acoustic signals within the 2.5 kHz–13.5 kHz range, and the maximum electrical signal output upon an acoustic signal input of 94 dBSPL was 6.33 mVpp. The performance of the fabricated McPAS coincided well with the designed parameters. PMID:24361926
NASA Astrophysics Data System (ADS)
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio
2015-06-01
Lattice spin-fermion models are important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically, while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real space variant of the ED + MC method that allows one to solve spin-fermion problems on lattices with up to 10^3 sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites, and with some effort on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.
Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities
NASA Astrophysics Data System (ADS)
Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.
2012-03-01
A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions - even for small anatomical sites (e.g., knee or wrist) - and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft tissue. Scatter-to-primary ratio (SPR) up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streaks. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated yet practical approach to identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.
NASA Astrophysics Data System (ADS)
Guo, Liwen
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, the simulation transport delay remains a problem. Because of the limitations of the three prominent existing delay compensators---the lead/lag filter, the McFarland compensator and the Sobiski/Cardullo predictor---new approaches to compensating the transport delay in a flight simulator have been developed. The first novel compensator is an adaptive predictor that uses the Kalman filter algorithm in a unique manner, so that the predictor can accurately provide the desired amount of prediction, significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, mathematical analysis illustrates why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. Four metrics---the glide slope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating on the handling qualities---were employed for the analyses. The overall analyses show that while the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator, the state space predictor is fairly superior for short delay and significantly superior for long delay to the McFarland compensator. The state space predictor also achieves better compensation than the adaptive predictor. The results of the evaluation of the effectiveness of these predictors in the piloted tests agree with those in the theoretical offline tests conducted with the recorded simulation aircraft states.
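The essence of a state space predictor is easy to sketch: propagate a discrete reference model of the aircraft dynamics ahead by the known transport delay and drive the visual system with the predicted state. The A and B matrices, frame rate, and delay below are illustrative assumptions, not the dissertation's reference model:

```python
import numpy as np

dt, delay_steps = 1.0 / 60.0, 3          # ~50 ms transport delay at 60 Hz
A = np.array([[1.0, dt],
              [-0.5 * dt, 0.98]])        # hypothetical reference dynamics
B = np.array([[0.0], [dt]])

def predict_ahead(x, u, d=delay_steps):
    """Predict the state d frames ahead, holding the operator input u."""
    for _ in range(d):
        x = A @ x + B @ u
    return x

x_now = np.array([[0.1], [0.0]])         # current state (e.g. pitch, pitch rate)
u_now = np.array([[0.05]])               # current operator control input
x_pred = predict_ahead(x_now, u_now)     # feed this to the visual system
```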
Development of Simulation Methods in the Gibbs Ensemble to Predict Polymer-Solvent Phase Equilibria
NASA Astrophysics Data System (ADS)
Gartner, Thomas; Epps, Thomas; Jayaraman, Arthi
Solvent vapor annealing (SVA) of polymer thin films is a promising method for post-deposition polymer film morphology control. The large number of important parameters relevant to SVA (polymer, solvent, and substrate chemistries, incoming film condition, annealing and solvent evaporation conditions) makes systematic experimental study of SVA a time-consuming endeavor, motivating the application of simulation and theory to the SVA system to provide both mechanistic insight and scans of this wide parameter space. However, to rigorously treat the phase equilibrium between polymer film and solvent vapor while still probing the dynamics of SVA, new simulation methods must be developed. In this presentation, we compare two methods to study polymer-solvent phase equilibrium: Gibbs Ensemble Molecular Dynamics (GEMD) and Hybrid Monte Carlo/Molecular Dynamics (Hybrid MC/MD). Liquid-vapor equilibrium results are presented for the Lennard-Jones fluid and for coarse-grained polymer-solvent systems relevant to SVA. We found that the Hybrid MC/MD method is more stable and consistent than GEMD, but GEMD has significant advantages in computational efficiency. We propose that Hybrid MC/MD simulations be used for unfamiliar systems under certain conditions, followed by much faster GEMD simulations to map out the remainder of the phase window.
NASA Astrophysics Data System (ADS)
Jover, J.; Haslam, A. J.; Galindo, A.; Jackson, G.; Müller, E. A.
2012-10-01
We present a continuous pseudo-hard-sphere potential based on a cut-and-shifted Mie (generalized Lennard-Jones) potential with exponents (50, 49). Using this potential one can mimic the volumetric, structural, and dynamic properties of the discontinuous hard-sphere potential over the whole fluid range. The continuous pseudo potential has the advantage that it may be incorporated directly into off-the-shelf molecular-dynamics code, allowing the user to capitalise on existing hardware and software advances. Simulation results for the compressibility factor of the fluid and solid phases of our pseudo hard spheres are presented and compared both to the Carnahan-Starling equation of state of the fluid and published data, the differences being indistinguishable within simulation uncertainty. The specific form of the potential is employed to simulate flexible chains formed from these pseudo hard spheres at contact (pearl-necklace model) for mc = 4, 5, 7, 8, 16, 20, 100, 201, and 500 monomer segments. The compressibility factor of the chains per unit of monomer, mc, approaches a limiting value at reasonably small values, mc < 50, as predicted by Wertheim's first order thermodynamic perturbation theory. Simulation results are also presented for highly asymmetric mixtures of pseudo hard spheres, with diameter ratios of 3:1, 5:1, 20:1 over the whole composition range.
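The functional form is compact enough to reproduce directly; a minimal sketch in reduced units follows. The generic Mie prefactor and the cut at the potential minimum follow the standard cut-and-shifted construction, but treat the parameter names and test values as illustrative:

```python
import numpy as np

N, M = 50, 49
C = (N / (N - M)) * (N / M) ** (M / (N - M))   # Mie prefactor, ~134.6 here
R_CUT = N / M                                  # location of the minimum, in sigma

def u_phs(r, sigma=1.0, eps=1.0):
    """Cut-and-shifted Mie (50,49) pseudo-hard-sphere potential."""
    r = np.asarray(r, dtype=float)
    u = C * eps * ((sigma / r) ** N - (sigma / r) ** M) + eps
    return np.where(r < R_CUT * sigma, u, 0.0)

print(u_phs([1.0, 50.0 / 49.0, 1.1]))   # -> [1.0, ~0.0, 0.0]: steep wall, zero beyond
```

The steep (50,49) exponents make the repulsion nearly hard-sphere-like while keeping forces continuous, which is what lets the potential drop into ordinary MD engines.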
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of the effects of variations in system design on PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications, such as development of hardware components as well as testing of new PET/MR software algorithms, e.g., assessment of point-spread-function-based reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Jagiełowicz-Ryznar, C.
2016-12-01
Numerical results are presented for the torsional vibration of a multi-cylinder crankshaft (MC) in a serial combustion engine with a viscous damper (VD) under complex forcing. In the MC case, the spectrum of the crankshaft rotation forcing is a sum of harmonic forcings whose amplitudes are comparable with the amplitude of the 1st harmonic. At engine operational velocities, the amplitude of the 2nd harmonic of the forcing moment may have a significant impact on the vibration damping of the MC. Results are shown for MC vibration as a function of the amplitude of the 2nd harmonic of the forcing moment, for the first mode of torsional vibration; higher torsional modes have no practical significance. The calculations assume the optimum VD damping coefficient, determined for simple harmonic forcing at the base critical velocity of the MC crankshaft.
McIDAS-V: A Data Analysis and Visualization Tool for Global Satellite Data
NASA Astrophysics Data System (ADS)
Achtor, T. H.; Rink, T. D.
2011-12-01
The Man-computer Interactive Data Access System (McIDAS-V) is a Java-based, open-source, freely available system for scientists, researchers and algorithm developers working with atmospheric data. The McIDAS-V software tools provide powerful new data manipulation and visualization capabilities, including 4-dimensional displays, an abstract data model with integrated metadata, user-defined computation, and a powerful scripting capability. As such, McIDAS-V is a valuable tool for scientists and researchers within the GEO and GEOSS domains. The advancing polar and geostationary orbit environmental satellite missions conducted by several countries will carry advanced instrumentation and systems that will collect and distribute land, ocean, and atmosphere data. These systems provide atmospheric and sea surface temperatures, humidity soundings, cloud and aerosol properties, and numerous other environmental products. This presentation will display and demonstrate some of the capabilities of McIDAS-V to analyze and display high temporal and spectral resolution data, using examples from international environmental satellites.
Esther McCready, RN: Nursing Advocate for Civil Rights
Pollitt, Phoebe A
2016-02-15
More than a decade before the Civil Rights Act of 1964, as an African American teenager from Baltimore, Maryland, Esther McCready challenged the discriminatory admissions policies of the University of Maryland School of Nursing (UMSON). The article explores nurse advocacy and how Esther McCready advocated for herself and for greater racial equity in nursing education during a time of civil rights turmoil. Her actions eventually resulted in the formation of numerous schools of nursing for African Americans across the South. This article recounts McCready's early life experiences and the powerful impact her actions had on creating educational options for nurses during a time when they were severely limited for African American women, including discussion of her student days at UMSON and her journey after nursing school. A review of pertinent legal cases and policies related to segregation and integration of higher education in the mid-twentieth century is presented, along with details of McCready's continued education and advocacy.
Bouhrara, Mustapha; Spencer, Richard G.
2015-01-01
Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in the human brain. However, even for the simplest two-pool signal model, consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination with conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrate the markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
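The noise marginalization used in the second and third approaches has a convenient closed form for Gaussian noise with a Jeffreys prior: integrating out σ leaves a marginal posterior proportional to SSR(θ)^(-N/2), where SSR is the sum of squared residuals. The sketch below applies this to a toy mono-exponential model, an illustrative stand-in for the far higher-dimensional mcDESPOT signal model:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 32)
y = np.exp(-3.0 * t) + 0.05 * rng.standard_normal(t.size)   # synthetic data

rates = np.linspace(0.5, 8.0, 400)        # grid over the parameter of interest
ssr = np.array([np.sum((y - np.exp(-k * t)) ** 2) for k in rates])
log_post = -0.5 * t.size * np.log(ssr)    # noise sigma marginalized out
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, rates)             # normalized marginal posterior

print(rates[np.argmax(post)])             # posterior mode near the true rate, 3
```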
Higo, Junichi; Umezawa, Koji
2014-01-01
We introduce computational studies on intrinsically disordered proteins (IDPs). In particular, we present our multicanonical molecular dynamics (McMD) simulations of two IDP-partner systems: NRSF-mSin3 and pKID-KIX. McMD is an enhanced conformational sampling method useful for biomolecular systems. An IDP adopts a specific tertiary structure upon binding to its partner molecule, although it is unstructured in the unbound (i.e., free) state. This IDP-specific property is called "coupled folding and binding". The McMD simulation treats the biomolecules with an all-atom model immersed in an explicit solvent. In the initial configuration of the simulation, the IDP and its partner molecules are set distant from each other, and the IDP conformation is disordered. The computationally obtained free-energy landscape for coupled folding and binding shows that native- and non-native-complex clusters are distributed in a complicated manner across the conformational space. The all-atom simulation suggests that induced folding and population selection are intricately coupled in the coupled folding and binding. Further analyses exemplify that the conformational fluctuations (dynamical flexibility) in the bound and unbound states are essentially important for characterizing IDP functioning.
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface adsorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
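The Metropolis acceptance rules at the heart of any grand canonical MC code can be sketched compactly. For alloy lattices of the kind ParaGrandMC targets, the natural grand-canonical move is a species transmutation at a fixed site rather than the fluid-style insertion/deletion shown here, but the Metropolis logic is the same; all parameter values are illustrative and in reduced units:

```python
import numpy as np

rng = np.random.default_rng(1)

def accept_insertion(dU, N, beta, mu, V, lam3):
    """N -> N + 1 insertion; dU is the energy change, lam3 the thermal
    de Broglie wavelength cubed, beta = 1/kT, mu the chemical potential."""
    arg = (V / (lam3 * (N + 1))) * np.exp(beta * (mu - dU))
    return rng.random() < min(1.0, arg)

def accept_deletion(dU, N, beta, mu, V, lam3):
    """N -> N - 1 removal; dU is the energy change of the move."""
    arg = (lam3 * N / V) * np.exp(-beta * (mu + dU))
    return rng.random() < min(1.0, arg)

# Example: attempt one insertion into a 100-particle reduced-unit system.
print(accept_insertion(dU=-0.5, N=100, beta=1.0, mu=-3.0, V=1000.0, lam3=1.0))
```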
SU-E-T-25: Real Time Simulator for Designing Electron Dual Scattering Foil Systems.
Carver, R; Hogstrom, K; Price, M; Leblanc, J; Harris, G
2012-06-01
To create a user-friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator should allow for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator consists of an analytical algorithm for calculating electron fluence and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with a refined Moliere formalism for scattering powers. The simulator also estimates central-axis x-ray dose contamination from the dual foil system. Once the geometry of the beamline is specified, the simulator allows the user to continuously vary the primary scattering foil material and thickness, the secondary scattering foil material and Gaussian shape (thickness and sigma), and the beam energy. The beam profile and x-ray contamination are displayed in real time. The simulator was tuned by comparison of off-axis electron fluence profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV and using present foils on the Elekta radiotherapy accelerator, the simulator profiles agreed to within 2% of MC profiles within 20 cm of the central axis. The x-ray contamination predictions matched measured data to within 0.6%. The calculation time was approximately 100 ms using a single processor, which allows for real-time variation of foil parameters using sliding bars. A real-time dual scattering foil system simulator has been developed. The tool has been useful in a project to redesign an electron dual scattering foil system for one of our radiotherapy accelerators. The simulator has also been useful as an instructional tool for our medical physics graduate students. © 2012 American Association of Physicists in Medicine.
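In Fermi-Eyges theory the off-axis fluence at a downstream plane is Gaussian, with a variance accumulated from the scattering power of everything upstream. A minimal sketch follows (projected-angle convention with the usual factor of 1/2; the beamline layout and scattering-power values are illustrative assumptions, not the simulator's refined Moliere formalism):

```python
import numpy as np

def sigma_lateral(z, z_grid, T_grid):
    """Gaussian sigma (cm) of the fluence profile at plane z:
    sigma^2(z) = 0.5 * integral_0^z T(z') (z - z')^2 dz'."""
    mask = z_grid <= z
    integrand = T_grid[mask] * (z - z_grid[mask]) ** 2
    return np.sqrt(0.5 * np.trapz(integrand, z_grid[mask]))

z_grid = np.linspace(0.0, 100.0, 2001)           # cm along the beamline
T_grid = np.where(z_grid < 0.05, 1.0, 1e-5)      # thin foil, then air (rad^2/cm)
sig = sigma_lateral(100.0, z_grid, T_grid)       # ~11 cm at the treatment plane
fluence = lambda r: np.exp(-r ** 2 / (2 * sig ** 2)) / (2 * np.pi * sig ** 2)
```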
Study on photon transport problem based on the platform of molecular optical simulation environment.
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method on both accuracy and efficiency.
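The core MC photon-transport loop used by platforms of this kind is short: sample a free path from the attenuation coefficient, split the photon weight between absorption and scattering, and redirect via a phase function. A minimal sketch with illustrative optical properties (MOSE/MCML-style codes use Henyey-Greenstein scattering in layered media rather than the isotropic toy below):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_a, mu_s = 0.1, 10.0                 # absorption / scattering coeffs (1/mm)
mu_t = mu_a + mu_s

def propagate_photon(max_steps=1000, w_min=1e-4):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])
    w, absorbed = 1.0, 0.0
    for _ in range(max_steps):
        s = -np.log(rng.random()) / mu_t   # free path from Beer-Lambert sampling
        pos = pos + s * direction
        absorbed += w * mu_a / mu_t        # deposit part of the photon weight
        w *= mu_s / mu_t
        if w < w_min:                      # Russian roulette would go here
            break
        v = rng.standard_normal(3)         # isotropic redirection (toy choice)
        direction = v / np.linalg.norm(v)
    return pos, absorbed

print(propagate_photon())
```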
Raman Monte Carlo simulation for light propagation for tissue with embedded objects
NASA Astrophysics Data System (ADS)
Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit
2018-02-01
Monte Carlo (MC) simulation is one of the most prominent simulation techniques and is rapidly becoming the model of choice to study light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and extended to different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid and ellipsoid) into the multi-layered structure. These geometries are useful for providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head and other simulation media. MC simulations were performed on various geometric media. The simulation of MCML with embedded objects (MCML-EO) was extended to include Raman scattering during photon propagation in the defined medium, and the location of Raman photon generation is recorded. Simulations were performed on a modelled breast tissue with a tumor (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans for the embedded objects, to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.
Formalization and Validation of an SADT Specification Through Executable Simulation in VHDL
1991-12-01
be found in (39, 40, 41). One recent summary of the SADT methodology was written by Marca and McGowan in 1988 (32). SADT is a methodology to provide...that is required. Also, the presence of "all" inputs and controls may not be needed for the activity to proceed. Marca and McGowan (32) describe a...diagrams which describe a complete system. Marca and McGowan define an SADT Model as: "a collection of carefully coordinated descriptions, starting from a
Astronaut William McArthur prepares for a training exercise
1993-07-20
S93-38686 (20 July 1993) --- Wearing a training version of the partial pressure launch and entry garment, astronaut William S. McArthur prepares to rehearse emergency egress procedures for the STS-58 mission. McArthur, along with the five other NASA astronauts and a visiting payload specialist assigned to the seven-member crew, later simulated contingency evacuation procedures. Most of the training session took place in the crew compartment and full fuselage trainers of the Space Shuttle mockup and integration laboratory.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
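The non-uniform sampling at the core of the APS scheme is simple to sketch: each optimization iteration distributes the MC history budget across pencil-beam spots in proportion to their current intensities, so the expensive simulation effort concentrates on high-weight spots. The spot counts, intensities, and budget below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def allocate_histories(intensities, n_histories):
    """Histories per spot, drawn in proportion to current spot intensities."""
    p = np.asarray(intensities, dtype=float)
    p = p / p.sum()
    return rng.multinomial(n_histories, p)

x = np.array([0.1, 2.0, 5.0, 0.0, 1.2])     # current spot intensities
print(allocate_histories(x, 100000))        # most histories go to spot 2
```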
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to take the Pauli Exclusion Principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that allow one to correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
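The standard Pauli-blocking step in semiconductor MC is a rejection test: a proposed transition into final state k' is accepted with probability 1 - f(k'), where f is the current estimate of the distribution function. A minimal sketch follows; extending this to e-e scattering, where both final states must be unblocked, is assumed here to factorize as (1 - f1)(1 - f2), while the paper's contribution concerns keeping f self-consistent during the simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def pauli_accept(f_final):
    """Reject the move if the proposed final state is already occupied."""
    return rng.random() < 1.0 - f_final

def ee_pauli_accept(f_final_1, f_final_2):
    """e-e scattering: both final states must pass the blocking test."""
    return pauli_accept(f_final_1) and pauli_accept(f_final_2)

print(ee_pauli_accept(0.9, 0.2))   # usually blocked: first state is nearly full
```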
STS-31 MS McCandless and MS Sullivan during JSC WETF underwater simulation
1990-03-05
This overall view shows STS-31 Mission Specialist (MS) Bruce McCandless II (left) and MS Kathryn D. Sullivan making a practice space walk in JSC's Weightless Environment Training Facility (WETF) Bldg 29 pool. McCandless works with a mockup of the remote manipulator system (RMS) end effector which is attached to a grapple fixture on the Hubble Space Telescope (HST) mockup. Sullivan manipulates HST hardware on the Support System Module (SSM) forward shell. SCUBA-equipped divers monitor the extravehicular mobility unit (EMU) suited crewmembers during this simulated extravehicular activity (EVA). No EVA is planned for the Hubble Space Telescope (HST) deployment, but the duo has trained for contingencies which might arise during the STS-31 mission aboard Discovery, Orbiter Vehicle (OV) 103. Photo taken by NASA JSC photographer Sheri Dunnette.
NASA Astrophysics Data System (ADS)
Drapek, R. J.; Kim, J. B.
2013-12-01
We simulated ecosystem response to climate change in the USA and Canada at a 5 arc-minute grid resolution using the MC1 dynamic global vegetation model and nine CMIP3 future climate projections as input. The climate projections were produced by 3 GCMs simulating 3 SRES emissions scenarios. We examined MC1 outputs for the conterminous USA by summarizing them by EPA level II and III ecoregions to characterize model skill and evaluate the magnitude and uncertainties of simulated ecosystem response to climate change. First, we evaluated model skill by comparing outputs from the recent historical period with benchmark datasets. Distribution of potential natural vegetation simulated by MC1 was compared with Kuchler's map. Above ground live carbon simulated by MC1 was compared with the National Biomass and Carbon Dataset. Fire return intervals calculated by MC1 were compared with maximum and minimum values compiled for the United States. Each EPA Level III Ecoregion was scored for average agreement with corresponding benchmark data and an average score was calculated for all three types of output. Greatest agreement with benchmark data happened in the Western Cordillera, the Ozark / Ouachita-Appalachian Forests, and the Southeastern USA Plains (EPA Level II Ecoregions). The lowest agreement happened in the Everglades and the Tamaulipas-Texas Semiarid Plain. For simulated ecosystem response to future climate projections we examined MC1 output for shifts in vegetation type, vegetation carbon, runoff, and biomass consumed by fire. Each ecoregion was scored for the amount of change from historical conditions for each variable and an average score was calculated. Smallest changes were forecast for Western Cordillera and Marine West Coast Forest ecosystems. Largest changes were forecast for the Cold Deserts, the Mixed Wood Plains, and the Central USA Plains. By combining scores of model skill for the historical period for each EPA Level 3 Ecoregion with scores representing the magnitude of ecosystem changes in the future, we identified high and low uncertainty ecoregions. The largest anticipated changes and the lowest measures of model skill coincide in the Central USA Plains and the Mixed Wood Plains. The combination of low model skill and high degree of ecosystem change elevate the importance of our uncertainty in this ecoregion. The highest projected changes coincide with relatively high model skill in the Cold Deserts. Climate adaptation efforts are the most likely to pay off in these regions. Finally, highest model skill and lowest anticipated changes coincide in the Western Cordillera and the Marine West Coast Forests. These regions may be relatively low-risk for climate change impacts when compared to the other ecoregions. These results represent only the first step in this type of analysis; there exist many ways to strengthen it. One, MC1 calibrations can be optimized using a structured optimization technique. Two, a larger set of climate projections can be used to capture a fuller range of GCMs and emissions scenarios. And three, employing an ensemble of vegetation models would make the analysis more robust.
NASA Astrophysics Data System (ADS)
Kerns, B. K.; Kim, J. B.; Day, M. A.; Pitts, B.; Drapek, R. J.
2017-12-01
Ecosystem process models are increasingly being used in regional assessments to explore potential changes in future vegetation and NPP due to climate change. We use the dynamic global vegetation model MAPSS-Century 2 (MC2) as one line of evidence for regional climate change vulnerability assessments for the US Forest Service, focusing our fine-tuning of the model calibration on observational sources related to forest vegetation. However, there is much interest in understanding projected changes for arid rangelands in the western US, such as grasslands, shrublands, and woodlands. Rangelands provide many ecosystem service benefits, support the sustainability of local rural communities, provide habitat for threatened and endangered species, and are threatened by annual grass invasion. Past work suggested that MC2 performance for arid rangeland plant functional types (PFTs) was poor, and the model has difficulty distinguishing annual versus perennial grasslands. Our objectives are to improve model performance for rangeland simulations and to explore the potential for splitting the grass plant functional type into annual and perennial types. We used the tri-state Blue Mountain Ecoregion as our study area, along with maps of potential vegetation from interpolated ground data, the National Land Cover Database, and ancillary NPP data derived from the MODIS satellite. MC2 historical simulations for the area overestimated woodland occurrence and underestimated shrubland and grassland PFTs. The spatial locations of the rangeland PFTs also often did not align well with observational data. While some disagreement may be due to differences in the respective classification rules, the errors are largely linked to MC2's tree and grass biogeography and physiology algorithms. Presently, only grass and forest productivity measures and carbon stocks are used to distinguish PFTs. MC2 grass and tree productivity simulation is problematic, in particular grass seasonal phenology in relation to seasonal patterns of temperature and precipitation. The algorithm also does not accurately translate simulated carbon stocks into the canopy allometry of the woodland tree species that dominate the BME, thereby inaccurately shading out the grasses in the understory. We are devising improvements to these shortcomings in the model architecture.
SU-E-T-155: Calibration of Variable Longitudinal Strength 103Pd Brachytherapy Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J; Radtke, J; Micka, J
Purpose: Brachytherapy sources with variable longitudinal strength (VLS) allow for a customized intensity along the length of the source. These have applications in focal brachytherapy treatments of prostate cancer, where dose boosting can be achieved through modulation of intra-source strengths. This work focused on development of a calibration methodology for VLS sources based on measurements and Monte Carlo (MC) simulations of five 1 cm 103Pd sources, each containing four regions of variable 103Pd strength. Methods: The air-kerma strengths of the sources were measured with a variable-aperture free-air chamber (VAFAC). Source strengths were also measured using a well chamber. The in-air azimuthal and polar anisotropy of the sources were measured by rotating them in front of a NaI scintillation detector and were calculated with MC simulations. Azimuthal anisotropy results were normalized to their mean intensity values. Polar anisotropy results were normalized to their average transverse-axis intensity values. The relative longitudinal strengths of the sources were measured via on-contact irradiations with radiochromic film, and were calculated with MC simulations. Results: The variable 103Pd loading of the sources was validated by VAFAC and well chamber measurements. Ratios of VAFAC air-kerma strengths and well chamber responses were within ±1.3% for all sources. Azimuthal anisotropy results indicated that ≥95% of the normalized values for all sources were within ±1.7% of the mean values. Polar anisotropy results indicated variations within ±0.3% for a ±7.6° angular region with respect to the source transverse axis. Locations and intensities of the 103Pd regions were validated by radiochromic film measurements and MC simulations. Conclusion: The calibration methodology developed in this work confirms that the VLS sources investigated have a high level of polar uniformity, and that the strength and longitudinal intensity can be verified experimentally and through MC simulations. 103Pd sources were provided by CivaTech Oncology, Inc.
A fragment-based approach to the SAMPL3 Challenge
NASA Astrophysics Data System (ADS)
Kulp, John L.; Blumenthal, Seth N.; Wang, Qiang; Bryan, Richard L.; Guarnieri, Frank
2012-05-01
The success of molecular fragment-based design depends critically on the ability to make predictions of binding poses and of affinity ranking for compounds assembled by linking fragments. The SAMPL3 Challenge provides a unique opportunity to evaluate the performance of a state-of-the-art fragment-based design methodology with respect to these requirements. In this article, we present results derived from linking fragments to predict affinity and pose in the SAMPL3 Challenge. The goal is to demonstrate how incorporating different aspects of modeling protein-ligand interactions impacts the accuracy of the predictions, including protein dielectric models, charged versus neutral ligands, ΔΔG solvation energies, and induced conformational stress. The core method is based on annealing of chemical potential in a Grand Canonical Monte Carlo (GC/MC) simulation. By imposing an initially very high chemical potential and then automatically running a sequence of simulations at successively decreasing chemical potentials, the GC/MC simulation efficiently discovers statistical distributions of bound fragment locations and orientations not found reliably without the annealing. This method accounts for configurational entropy and the role of bound water molecules, and results in a prediction of all the locations on the protein that have any affinity for the fragment. Disregarding any of these factors in affinity-rank prediction leads to significantly worse correlation with experimentally determined free energies of binding. We relate three important conclusions from this challenge as applied to GC/MC: (1) modeling neutral ligands—regardless of the charged state in the active site—produced better affinity ranking than using charged ligands, although, in both cases, the poses were almost exactly overlaid; (2) simulating explicit water molecules in the GC/MC gave better affinity and pose predictions; and (3) applying a ΔΔG solvation correction further improved the ranking of the neutral ligands. Using the GC/MC method under a variety of parameters in the blinded SAMPL3 Challenge provided important insights into the relevant parameters and boundaries in predicting binding affinities using simulated annealing of chemical potential calculations.
Numerical analysis of the Magnus moment on a spin-stabilized projectile
NASA Astrophysics Data System (ADS)
Cremins, Michael; Rodebaugh, Gregory; Verhulst, Claire; Benson, Michael; van Poppel, Bret
2016-11-01
The Magnus moment is a result of an uneven pressure distribution that occurs when an object rotates in a crossflow. Unlike the Magnus force, which is often small for spin-stabilized projectiles, the Magnus moment can have a strong detrimental effect on flight stability. According to one source, most transonic and subsonic flight instabilities are caused by the Magnus moment [Modern Exterior Ballistics, McCoy], and yet simulations often fail to accurately predict the Magnus moment in the subsonic regime. In this study, we present hybrid Reynolds Averaged Navier Stokes (RANS) and Large Eddy Simulation (LES) predictions of the Magnus moment for a spin-stabilized projectile. Velocity, pressure, and Magnus moment predictions are presented for multiple Reynolds numbers and spin rates. We also consider the effect of a sting mount, which is commonly used when conducting flow measurements in a wind tunnel or water channel. Finally, we present the initial designs for a novel Magnetic Resonance Velocimetry (MRV) experiment to measure three-dimensional flow around a spinning projectile. This work was supported by the Department of Defense High Performance Computing Modernization Program (DoD HPCMP).
Dosimetric study of GZP6 60Co high dose rate brachytherapy source.
Lei, Qin; Xu, Anjian; Gou, Chengjun; Wen, Yumei; He, Donglin; Wu, Junxiang; Hou, Qing; Wu, Zhangwen
2018-05-28
The purpose of this study was to obtain dosimetric parameters of GZP6 60Co brachytherapy source number 3. The Geant4 MC code has been used to obtain the dose rate distribution following the American Association of Physicists in Medicine (AAPM) TG-43U1 dosimetric formalism. In the simulation, the source was centered in a 50 cm radius water phantom. The cylindrical ring voxels were 0.1 mm thick for r ≤ 1 cm, 0.5 mm for 1 cm < r ≤ 5 cm, and 1 mm for r > 5 cm. The kerma-dose approximation was performed for r > 0.75 cm to increase the simulation efficiency. Based on the numerical results, the dosimetric datasets were obtained. These results were compared with the available data for similar 60Co high dose rate sources, and the detailed dosimetric characterization was discussed. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
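For orientation, a minimal sketch of the TG-43U1 line-source dose-rate equation that such MC-derived datasets feed into; the active length, dose-rate constant, and radial dose table below are illustrative placeholders, not the published GZP6 data:

```python
import numpy as np

L_SRC = 0.35                      # active length [cm] (assumed)
LAMBDA = 1.09                     # dose-rate constant [cGy/(h U)] (illustrative)
R0, TH0 = 1.0, np.pi / 2          # TG-43 reference point: r0 = 1 cm, theta0 = 90 deg

def G_line(r, theta, L=L_SRC):
    """TG-43U1 line-source geometry function G_L(r, theta)."""
    y, z = r * np.sin(theta), r * np.cos(theta)
    if np.isclose(y, 0.0):
        return 1.0 / (r * r - L * L / 4.0)   # on-axis limit
    beta = abs(np.arctan2(y, z - L / 2) - np.arctan2(y, z + L / 2))
    return beta / (L * y)                    # angle subtended / (L r sin(theta))

# Hypothetical radial dose function samples g_L(r); real values come from the MC data.
_r_tab = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
_g_tab = np.array([1.01, 1.00, 1.00, 0.98, 0.90, 0.77])

def dose_rate(r, theta, S_k=1.0, F=1.0):
    """Dose rate per unit air-kerma strength; anisotropy F(r, theta) defaults to 1."""
    g = np.interp(r, _r_tab, _g_tab)
    return S_k * LAMBDA * (G_line(r, theta) / G_line(R0, TH0)) * g * F

print(dose_rate(2.0, np.pi / 2))  # dose rate at 2 cm on the transverse axis
```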
Simulation and analysis of a proposed replacement for the McCook port of entry inspection station
DOT National Transportation Integrated Search
1999-04-01
This report describes a study of a proposed replacement for the McCook Port of Entry inspection station at the entry to South Dakota. In order to assess the potential for a low-speed weigh in motion (WIM) scale within the station to pre-screen trucks...
Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria
ERIC Educational Resources Information Center
Li, Zhao; Corti, David S.
2018-01-01
The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…
Simulation of streamflow in the McTier Creek watershed, South Carolina
Feaster, Toby D.; Golden, Heather E.; Odom, Kenneth R.; Lowery, Mark A.; Conrads, Paul; Bradley, Paul M.
2010-01-01
The McTier Creek watershed is located in the Sand Hills ecoregion of South Carolina and is a small catchment within the Edisto River Basin. Two watershed hydrology models were applied to the McTier Creek watershed as part of a larger scientific investigation to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin. The two models are the topography-based hydrological model (TOPMODEL) and the grid-based mercury model (GBMM). TOPMODEL uses the variable-source area concept for simulating streamflow, and GBMM uses a spatially explicit modified curve-number approach for simulating streamflow. The hydrologic output from TOPMODEL can be used explicitly to simulate the transport of mercury in separate applications, whereas the hydrology output from GBMM is used implicitly in the simulation of mercury fate and transport in GBMM. The modeling efforts were a collaboration between the U.S. Geological Survey and the U.S. Environmental Protection Agency, National Exposure Research Laboratory. Calibrations of TOPMODEL and GBMM were done independently while using the same meteorological data and the same period of record of observed data. Two U.S. Geological Survey streamflow-gaging stations were available for comparison of observed daily mean flow with simulated daily mean flow: station 02172300, McTier Creek near Monetta, South Carolina, and station 02172305, McTier Creek near New Holland, South Carolina. The period of record at the Monetta gage covers a broad range of hydrologic conditions, including a drought and a significant wet period. Calibrating the models under these extreme conditions along with the normal flow conditions included in the record enhances the robustness of the two models. Several quantitative assessments of the goodness of fit between model simulations and the observed daily mean flows were done. These included the Nash-Sutcliffe model-fit efficiency index, Pearson's correlation coefficient, the root mean square error, the bias, and the mean absolute error. In addition, a number of graphical tools were used to assess how well the models captured the characteristics of the observed data at the Monetta and New Holland streamflow-gaging stations. The graphical tools included temporal plots of simulated and observed daily mean flows, flow-duration curves, single-mass curves, and various residual plots. The results indicated that TOPMODEL and GBMM generally produced simulations that reasonably capture the quantity, variability, and timing of the observed streamflow. For the periods modeled, the total volume of simulated daily mean flows as compared to the total volume of the observed daily mean flow from TOPMODEL was within 1 to 5 percent, and the total volume from GBMM was within 1 to 10 percent. A noticeable characteristic of the simulated hydrographs from both models is the complexity of balancing groundwater recession and flow at the streamgage when flows peak and recede rapidly. However, GBMM results indicate that groundwater recession, which affects the receding limb of the hydrograph, was more difficult to estimate with the spatially explicit curve-number approach.
Although the purpose of this report is not to directly compare the two models, it is noteworthy that, given the characteristics of the McTier Creek watershed and the fact that GBMM uses the spatially explicit curve-number approach rather than the variable-source-area concept of TOPMODEL, GBMM was still able to capture the flow characteristics reasonably well.
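A minimal sketch of the goodness-of-fit statistics named above, applied to daily mean flows; the observed and simulated arrays are made-up stand-ins for the gage records:

```python
import numpy as np

def fit_stats(obs, sim):
    """Standard goodness-of-fit statistics for observed vs. simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    nse  = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)  # Nash-Sutcliffe efficiency
    r    = np.corrcoef(obs, sim)[0, 1]                             # Pearson correlation
    rmse = np.sqrt(np.mean(resid**2))                              # root mean square error
    bias = resid.mean()                                            # mean of residuals
    mae  = np.mean(np.abs(resid))                                  # mean absolute error
    return {"NSE": nse, "r": r, "RMSE": rmse, "bias": bias, "MAE": mae}

obs = np.array([12.0, 30.0, 18.0, 9.0, 45.0, 22.0])   # observed daily mean flow (made up)
sim = np.array([14.0, 27.0, 20.0, 8.0, 41.0, 25.0])   # simulated daily mean flow (made up)
print(fit_stats(obs, sim))
```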
Correction for human head motion in helical x-ray CT
NASA Astrophysics Data System (ADS)
Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.
2016-02-01
Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry. This limited the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
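A minimal sketch of the navigation primitive such a module needs: the distance from a point along a direction to a bounding surface defined by a quadratic function x·Ax + b·x + c = 0 (the unit-sphere example is chosen for illustration; the package's actual interface is not shown):

```python
import numpy as np

def distance_to_quadric(o, d, A, b, c, eps=1e-9):
    """Distance from point o along unit direction d to the surface
    x^T A x + b.x + c = 0; np.inf if the ray never crosses it."""
    qa = d @ A @ d
    qb = 2.0 * (o @ A @ d) + b @ d
    qc = o @ A @ o + b @ o + c
    if abs(qa) < eps:                       # surface effectively planar along d
        return -qc / qb if qb * qc < 0 else np.inf
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return np.inf                       # ray misses the surface
    roots = sorted([(-qb - np.sqrt(disc)) / (2 * qa),
                    (-qb + np.sqrt(disc)) / (2 * qa)])
    ahead = [t for t in roots if t > eps]
    return ahead[0] if ahead else np.inf    # nearest boundary crossing ahead

# Example: unit sphere x^2 + y^2 + z^2 - 1 = 0, ray from the origin along +x.
A, b, c = np.eye(3), np.zeros(3), -1.0
print(distance_to_quadric(np.zeros(3), np.array([1.0, 0.0, 0.0]), A, b, c))  # -> 1.0
```

A tracker steps each particle by the minimum of this distance over all bounding surfaces of the current region, then updates the region index.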
NASA Astrophysics Data System (ADS)
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2008-02-01
Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations, determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. MP-CFD is shown to be about 60 times faster than a regular forced detection MC program, with similar results.
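A toy sketch of the convolution-based forced detection idea: at a scatter site, a copy of the photon is forced to the detector and its weight is spread over detector bins with a distance-dependent blurring kernel; the attenuation coefficient, kernel-width model, and geometry are all invented:

```python
import numpy as np

def cfd_contribution(weight, depth_to_detector, mu=0.15, bins=64, pixel=0.4):
    """Weight deposited across a 1D detector row by one forced-detection copy.
    mu: linear attenuation coefficient [1/cm] (invented); pixel in cm."""
    p_escape = np.exp(-mu * depth_to_detector)   # survival along the forced path
    sigma = 0.1 + 0.05 * depth_to_detector       # blur grows with depth (model assumption)
    x = (np.arange(bins) - bins / 2) * pixel     # detector bin centers
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                       # normalize so weight is conserved
    return weight * p_escape * kernel            # weight smeared over detector bins

row = cfd_contribution(weight=1.0, depth_to_detector=8.0)
print(row.max(), row.sum())                      # peak and total escaped weight
```

In MP-CFD the same scatter site would produce one such smeared contribution per projection angle, which is what amortizes the transport cost across all projections.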
NASA Astrophysics Data System (ADS)
Bernede, Adrien; Poëtte, Gaël
2018-02-01
In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists of splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to the time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.
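A minimal sketch of the unsplit idea in a monokinetic, single-nuclide toy problem: the medium's number density, and hence the macroscopic cross section, is integrated on the fly along each particle flight instead of being frozen over a time step; all rates and values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, v, phi = 2.0, 1.0, 0.5          # microscopic xs, particle speed, toy depletion rate

def sample_flight(t, N, t_end, dt=1e-3):
    """Advance one particle; solve the reduced model dN/dt = -phi*N on the fly
    while accumulating optical depth toward a sampled collision target."""
    target = -np.log(rng.random() + 1e-300)   # sampled optical-depth target
    tau = 0.0
    while t < t_end:
        N *= np.exp(-phi * dt)                # medium evolves during the flight
        tau += sigma * N * v * dt             # optical depth over this substep
        t += dt
        if tau >= target:
            return t, N, True                 # collision occurred at time t
    return t, N, False                        # particle reaches census uncollided

print(sample_flight(t=0.0, N=1.0, t_end=2.0))
```

A split solver would instead hold N fixed over the whole step, which is exactly the source of the accuracy loss the abstract describes.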
Trends in the Brazil/Malvinas Confluence region
NASA Astrophysics Data System (ADS)
Combes, Vincent; Matano, Ricardo P.
2014-12-01
Observations show abrupt changes in the oceanic circulation of the southwestern Atlantic. These studies report a southward drift of the Brazil/Malvinas Confluence (BMC) and a change in the spectral characteristics of the Malvinas Current (MC) transport. We address the cause of these changes using the result of a high-resolution numerical experiment. The experiment, which is consistent with observations, shows a southward BMC displacement at a rate of 0.62°/decade between 1993 and 2008, and a shift of the spectral characteristics of the MC transport after 1999. We find that these changes are driven by a weakening of the northern branch of the Antarctic Circumpolar Current, which translates to a weakening of the MC transport and a southward BMC drift. The drift changes the spectral characteristics of the MC transport, which becomes more influenced by annual and semiannual variations associated with the BMC.
Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method
2016-01-01
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
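A minimal sketch of the first, inexpensive step of the two-step scheme: a Metropolis draw on a site's protonation state from its intrinsic pKa, so that a costly neMD switching trajectory is only launched when the flip is accepted; the pKa and temperature values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
LN10 = np.log(10.0)

def attempt_protonation_flip(protonated, pKa, pH, kT=0.593):
    """Intrinsic-pKa Metropolis step (kT in kcal/mol, ~300 K assumed).
    Returns (new_state, accepted); acceptance triggers the neMD switch."""
    dG_deprot = kT * LN10 * (pKa - pH)      # free energy of deprotonation
    dG = dG_deprot if protonated else -dG_deprot
    if rng.random() < min(1.0, np.exp(-dG / kT)):
        return (not protonated), True       # flip accepted -> launch neMD switch
    return protonated, False

# His-like site (intrinsic pKa ~ 6.5, assumed) at pH 7: deprotonation is favored.
print(attempt_protonation_flip(True, pKa=6.5, pH=7.0))
```

The acceptance ratio reduces to 10^(pH - pKa) for the deprotonation direction, which is the textbook Henderson-Hasselbalch weighting.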
NASA Astrophysics Data System (ADS)
Baranowski, D.; Waliser, D. E.; Jiang, X.
2016-12-01
One of the key challenges in subseasonal weather forecasting is the fidelity in representing the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC). In reality both propagating and non-propagating MJO events are observed, but in numerical forecasts the latter group largely dominates. For this study, comprehensive model performances are evaluated using metrics that utilize the mean precipitation pattern and the amplitude and phase of the diurnal cycle, with a particular focus on the linkage between a model's local MC variability and its fidelity in representing propagation of the MJO and equatorial Kelvin waves across the MC. Subseasonal-to-seasonal variability of mean precipitation and its diurnal cycle in 20-year-long climate simulations from over 20 general circulation models (GCMs) is examined to benchmark model performance. Our results show that many models struggle to represent the precipitation pattern over the complex Maritime Continent terrain. Many models show negative biases in mean precipitation and in the amplitude of its diurnal cycle; these biases are often larger over land than over ocean. Furthermore, only a handful of models realistically represent the spatial variability of the phase of the diurnal cycle of precipitation. Models tend to correctly simulate the timing of the diurnal maximum of precipitation over the ocean during the local solar morning, but fail to capture the influence of the land, with the maximum of precipitation there occurring, unrealistically, at the same time as over the ocean. The day-to-day and seasonal variability of the mean precipitation follows observed patterns, but is often unrealistic for the diurnal cycle amplitude. The intraseasonal variability of the amplitude of the diurnal cycle of precipitation is mainly driven by a model's ability (or lack thereof) to produce an eastward-propagating MJO-like signal. Our results show that many models tend to decrease the apparent land-sea contrast in the mean precipitation and diurnal cycle of precipitation patterns over the Maritime Continent. As a result, the complexity of those patterns is heavily smoothed, to such an extent in some models that the Maritime Continent's features and imprint are almost unrecognizable relative to the eastern Indian Ocean or western Pacific.
On the definition of a Monte Carlo model for binary crystal growth.
Los, J H; van Enckevort, W J P; Meekes, H; Vlieg, E
2007-02-01
We show that consistency of the transition probabilities in a lattice Monte Carlo (MC) model for binary crystal growth with the thermodynamic properties of a system does not guarantee that MC simulations near equilibrium agree with the thermodynamic equilibrium phase diagram for that system. The deviations remain small for systems with small bond energies, but they can increase significantly for systems with a large melting entropy, typical for molecular systems. These deviations are attributed to the surface kinetics, which is responsible for a metastable zone below the liquidus line where no growth occurs, even in the absence of a 2D nucleation barrier. Here we propose an extension of the MC model that introduces a freedom of choice in the transition probabilities while staying within the thermodynamic constraints. This freedom can be used to eliminate the discrepancy between the MC simulations and the thermodynamic equilibrium phase diagram. Agreement is achieved for the choice of transition probabilities that yields the fastest decrease of the free energy (i.e., the largest growth rate) of the system at a temperature slightly below the equilibrium temperature. An analytical model is developed that reproduces the MC results quite well, enabling a straightforward determination of the optimal set of transition probabilities. Application of both the MC and the analytical model to conditions well away from equilibrium, giving rise to kinetic phase diagrams, shows that the effect of kinetics on segregation is even stronger than that predicted by previous models.
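A minimal sketch of the freedom being exploited: detailed balance fixes only the ratio of attachment to detachment probabilities, exp(-ΔE/kT), so any common kinetic prefactor leaves equilibrium intact; the symmetric (Glauber-like) splitting below is one admissible, purely illustrative choice:

```python
import numpy as np

def rates(dE, kT=1.0, nu=1.0):
    """One admissible splitting of the detailed-balance ratio: any common
    prefactor nu may multiply both rates without changing equilibrium."""
    k_att = nu * np.exp(-dE / (2.0 * kT))   # attachment rate
    k_det = nu * np.exp(+dE / (2.0 * kT))   # detachment rate
    return k_att, k_det                     # ratio = exp(-dE/kT) for any nu

for dE in (-1.0, 0.0, 1.0):
    ka, kd = rates(dE)
    print(dE, ka / kd, np.exp(-dE))         # ratio always satisfies detailed balance
```

The paper's optimal choice corresponds to picking, within this family, the set that maximizes the free-energy decrease (growth rate) just below the equilibrium temperature.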
NASA Astrophysics Data System (ADS)
Schiavi, A.; Senzacqua, M.; Pioli, S.; Mairani, A.; Magro, G.; Molinelli, S.; Ciocca, M.; Battistoni, G.; Patera, V.
2017-09-01
Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On the one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions onto a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details of the interaction of particles with human tissues, are considered to be the most reliable tool to address the complexity of mixed-field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources. Therefore MC simulations are usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose GPU computing prompted the development of trimmed-down MC-based dose engines that can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes on CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked, and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition against computational cost. Currently, the most refined module is the transport of proton beams in water: single pencil beam depth-dose distributions obtained with fred agree with those produced by standard MC codes within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving about 10 million primaries per second on a single card. This performance allows one to recalculate a proton treatment plan at 1% of the total particles in just a few minutes.
NASA Astrophysics Data System (ADS)
Magri, Fabien; Cacace, Mauro; Fischer, Thomas; Kolditz, Olaf; Wang, Wenqing; Watanabe, Norihiro
2017-04-01
In contrast to simple homogeneous 1D and 2D systems, no appropriate analytical solutions exist to test the onset of thermal convection against numerical models of complex 3D systems that account for variable fluid density and viscosity as well as permeability heterogeneity (e.g. the presence of faults). Owing to the importance of thermal convection for the transport of energy and minerals, the development of a benchmark test for density/viscosity-driven flow is crucial to ensure that the applied numerical models accurately simulate the physical processes at hand. The present study proposes a 3D test case for the simulation of thermal convection in a faulted system that accounts for temperature-dependent fluid density and viscosity. The linear stability analysis recently developed by Malkovsky and Magri (2016) is used to estimate the critical Rayleigh number above which thermal convection of viscous fluids is triggered. The numerical simulations are carried out using the finite element technique. OpenGeoSys (Kolditz et al., 2012) and MOOSE (Gaston et al., 2009) results are compared to those obtained using the commercial software FEFLOW (Diersch, 2014) to test the ability of widely applied codes to match both the critical Rayleigh number and the dynamical features of convective processes. The methodology and Rayleigh expressions given in this study can be applied to any numerical model that deals with 3D geothermal processes in faulted basins, such as the Tiberias Basin (Magri et al., 2016). References Kolditz, O., Bauer, S., Bilke, L., Böttcher, N., Delfs, J. O., Fischer, T., Görke, U. J., Kalbacher, T., Kosakowski, G., McDermott, C. I., Park, C. H., Radu, F., Rink, K., Shao, H., Shao, H. B., Sun, F., Sun, Y., Sun, A., Singh, K., Taron, J., Walther, M., Wang, W., Watanabe, N., Wu, Y., Xie, M., Xu, W., Zehner, B., 2012. OpenGeoSys: an open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Environmental Earth Sciences, 67(2), 589-599. Diersch, H. J., 2014. FEFLOW Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media, Springer-Verlag Berlin Heidelberg, ISBN 978-3-642-38738-8. Gaston, D., Newman, C., Hansen, G., Lebrun-Grandie, D., 2009. MOOSE: A parallel solution framework for coupled systems of nonlinear equations. Nucl. Engrg. Design, 239, 1768-1778. Magri, F., Möller, S., Inbar, N., Möller, P., Raggad, M., Rödiger, T., Rosenthal, E., Siebert, C., 2016. 2D and 3D coexisting modes of thermal convection in fractured hydrothermal systems - Implications for transboundary flow in the Lower Yarmouk Gorge. Marine and Petroleum Geology 78, 750-758, DOI: 10.1016/j.marpetgeo.2016.10.002. Malkovsky, V. I., Magri, F., 2016. Thermal convection of temperature-dependent viscous fluids within three-dimensional faulted geothermal systems: estimation from linear and numerical analyses, Water Resour. Res., 52, 2855-2867, DOI: 10.1002/2015WR018001.
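As a minimal sketch of the linear-stability check, the porous-medium Rayleigh number can be compared against the classical critical value 4π² of the homogeneous, constant-viscosity limit; the property values below are generic illustrations, not the benchmark's parameters, and the cited analyses generalize the criterion to faulted systems with temperature-dependent viscosity:

```python
import numpy as np

# Generic porous-medium properties (illustrative only)
rho0, g, alpha = 1000.0, 9.81, 2.0e-4      # density [kg/m3], gravity [m/s2], expansivity [1/K]
dT, H = 80.0, 3000.0                       # temperature contrast [K], layer thickness [m]
k, mu, kappa = 1.0e-13, 3.0e-4, 1.0e-6     # permeability [m2], viscosity [Pa s], diffusivity [m2/s]

Ra = rho0 * g * alpha * dT * k * H / (mu * kappa)   # Horton-Rogers-Lapwood Rayleigh number
Ra_c = 4.0 * np.pi**2                               # critical value for the classical case
print(f"Ra = {Ra:.1f}, Ra_c = {Ra_c:.1f}, convection expected: {Ra > Ra_c}")
```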
NASA Astrophysics Data System (ADS)
Casagrande, F.; Souza, R.; Pezzi, L.
2013-05-01
In the Southwest Atlantic, close to 40°S, two ocean currents with distinct characteristics meet: the Brazil Current (BC), warm and saline, and the Malvinas Current (MC), cold and of low salinity. Their meeting results in strong mesoscale activity marked by the formation of eddies, and the region is known as the Brazil-Malvinas Confluence (BMC). Since 2002, the INTERCONF project (Ocean-Atmosphere Interaction over the BMC region) has carried out in situ data collection with radiosondes and XBTs onboard the Oceanographic Support Ship Ary Rongel during its transit from Brazil to the Antarctic continent. This paper analyzes the thermal contrast and ocean-atmosphere coupling at the ocean front from the INTERCONF data, and compares the results to satellite data (QuikSCAT) and numerical models (Eta-CPTEC/INPE). The results indicate that the Sea Surface Temperature (SST) is driving the atmosphere: over the warm waters of the BC an intensification of the winds and heat fluxes occurs, and the reverse occurs over the cold waters of the MC. The data collected in 2009 include the presence of a warm-core eddy (42°S to 43.1°S), over which higher values of heat fluxes and wind speed were recorded relative to its surroundings. Over the warm-core eddy the recorded wind speed was about 10 m/s, while over the BC and MC it was approximately 7 m/s and 2 m/s, respectively. The satellite data and the numerical model tend to overestimate the wind speed in the region relative to the data collected in situ. The heat fluxes from the numerical model tend to increase over the warm waters and decrease over the cold waters, although the values produced by the model show low correlation with the observations.
Singh, Kunwar; Tiwari, Satish Chandra; Gupta, Maneesha
2014-01-01
The paper introduces novel architectures for implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, the traditional C2MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C2MOS based flip-flop designs mC2MOSff1 and mC2MOSff2 are realized using only sixteen transistors each, while the number of clocked transistors is also reduced in the case of mC2MOSff1. Postlayout simulations indicate that the mC2MOSff1 flip-flop shows a 12.4% improvement in PDAP (power-delay-area product) when compared with the transmission gate flip-flop (TGFF) at 16X capacitive load, which is considered to be the best design alternative among the conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight-bit asynchronous counter is designed to layout level. LVS and parasitic extraction were carried out on Calibre, whereas layouts were implemented using IC Station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to reveal the scalability of both the designs at modern process nodes.
SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.
Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S
2012-06-01
With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it has become clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, computing the DDC with small statistical uncertainty is still time-consuming. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10⁸ to 10⁶, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
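A toy version of the experiment described above: a fluence map is optimized by gradient descent on a quadratic penalty using a noisy dose-deposition matrix, and the resulting fluence is then scored with the noise-free matrix; all sizes and values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_beamlet = 200, 40
D_true = rng.random((n_vox, n_beamlet)) * 0.1            # noise-free DDC matrix
d_presc = np.full(n_vox, 1.0)                            # prescribed dose

def optimize(D, iters=500, lr=0.5):
    """Projected gradient descent on ||D x - d||^2 with x >= 0."""
    x = np.zeros(n_beamlet)
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ x - d_presc)
        x = np.maximum(x - lr * grad / n_vox, 0.0)       # projection: physical weights
    return x

for rel_noise in (0.0, 0.038):                           # ~3.8% noise, as quoted above
    D_noisy = D_true * (1.0 + rel_noise * rng.standard_normal(D_true.shape))
    x = optimize(D_noisy)
    dose = D_true @ x                                    # actual dose via noise-free DDC
    err = np.linalg.norm(dose - d_presc) / np.linalg.norm(d_presc)
    print(f"DDC noise {rel_noise:5.1%} -> relative dose error {err:.4f}")
```

The point mirrored here is that the error in the recovered dose is typically much smaller than the noise injected into the DDC matrix.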
Kern, Christoph
2016-03-23
This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible-spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions), with regard to a change in SO2 column density or aerosol optical thickness, can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes, using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.
Designing new guides and instruments using McStas
NASA Astrophysics Data System (ADS)
Farhi, E.; Hansen, T.; Wildes, A.; Ghosh, R.; Lefmann, K.
With the increasing complexity of modern neutron-scattering instruments, the need for powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) has become pressing. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle such effects, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics and instruments [1]. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering and time-of-flight spectrometers [2]. In this paper, we present some simulation results concerning different guide geometries that may be used in the future at the Institut Laue-Langevin. Gain factors ranging from two to five may be obtained for the integrated intensities, depending on the exact geometry, the guide coatings and the source.
Teymouri, Manouchehr; Barati, Nastaran; Pirro, Matteo; Sahebkar, Amirhosein
2018-01-01
Dimethoxycurcumin (DiMC) is a synthetic analog of curcumin with superior inter-related pro-oxidant and anti-cancer activity, and metabolic stability. Numerous studies have shown that DiMC retains the biologically beneficial features, including anti-inflammatory, anti-carcinogenic, and cytoprotective properties, almost to the same extent as curcumin. Unlike curcumin, demethoxycurcumin, and bis-demethoxycurcumin, which vary in the number of methoxy groups per molecule, DiMC lacks the phenolic-OH groups, and it has drawn the attention of researchers attempting to elucidate the structure-activity relationship (SAR) of curcumin. In this regard, tetrahydrocurcumin (THC), the reduced and biologically inert metabolite of curcumin, demonstrates the significance of the conjugated α,β-diketone moiety for curcumin activity. DiMC exerts unique molecular activities compared to curcumin, including induction of androgen receptor (AR) degradation and suppression of the transcription factor activator protein-1 (AP-1). The enhanced AR degradation on DiMC treatment suggests its potential as a novel anticancer agent against resistant tumors with androgenic etiology. Further, DiMC might be a potential treatment for acne vulgaris. DiMC induces epigenetic alteration more effectively than curcumin, although neither shows direct DNA hypomethylating activity. Given its metabolic stability, nanoparticulation of DiMC is promising for in vivo effectiveness. However, studies in this regard are still in their infancy. In the current review, we portray the various molecular and biological functions of DiMC reported so far. Whenever possible, its efficiency is compared with that of curcumin, and the reasons for DiMC being more metabolically stable are elaborated. We also outline future investigations with respect to varying DiMC nanoparticles. © 2016 Wiley Periodicals, Inc.
High-performance mc-Si ingot grown by modified DS system: Numerical investigation
NASA Astrophysics Data System (ADS)
Thiyagaragjan, M.; Aravindan, G.; Srinivasan, M.; Ramasamy, P.
2018-04-01
A numerical investigation is carried out on a multi-crystalline silicon ingot grown using side-top and side-bottom heaters, and the temperature distribution, von Mises stress and maximum shear stress are analyzed. To analyze the changes, the results from the side-top and side-bottom heaters are compared. The stress values are reduced when the side-bottom heaters are used. A 2D numerical approach is successfully applied to study the stress parameters in directional-solidification silicon.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although three general-purpose Monte Carlo (MC) simulation tools (Geant4, FLUKA and PHITS) have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems, such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter lists showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
Charge Structure and Counterion Distribution in Hexagonal DNA Liquid Crystal
Dai, Liang; Mu, Yuguang; Nordenskiöld, Lars; Lapp, Alain; van der Maarel, Johan R. C.
2007-01-01
A hexagonal liquid crystal of DNA fragments (double-stranded, 150 basepairs) with tetramethylammonium (TMA) counterions was investigated with small angle neutron scattering (SANS). We obtained the structure factors pertaining to the DNA and counterion density correlations with contrast matching in the water. Molecular dynamics (MD) computer simulation of a hexagonal assembly of nine DNA molecules showed that the inter-DNA distance fluctuates with a correlation time around 2 ns and a standard deviation of 8.5% of the interaxial spacing. The MD simulation also showed a minimal effect of the fluctuations in inter-DNA distance on the radial counterion density profile and significant penetration of the grooves by TMA. The radial density profile of the counterions was also obtained from a Monte Carlo (MC) computer simulation of a hexagonal array of charged rods with fixed interaxial spacing. Strong ordering of the counterions between the DNA molecules and the absence of charge fluctuations at longer wavelengths were shown by the SANS number and charge structure factors. The DNA-counterion and counterion structure factors are interpreted with the correlation functions derived from the Poisson-Boltzmann equation, MD, and MC simulation. Best agreement is observed between the experimental structure factors and the prediction based on the Poisson-Boltzmann equation and/or MC simulation. The SANS results show that TMA is too large to penetrate the grooves to a significant extent, in contrast to what is shown by MD simulation. PMID:17098791
NASA Astrophysics Data System (ADS)
Federrath, C.; Roman-Duval, J.; Klessen, R. S.; Schmidt, W.; Mac Low, M.-M.
2010-03-01
Context. Density and velocity fluctuations on virtually all scales observed with modern telescopes show that molecular clouds (MCs) are turbulent. The forcing and structural characteristics of this turbulence are, however, still poorly understood. Aims: To shed light on this subject, we study two limiting cases of turbulence forcing in numerical experiments: solenoidal (divergence-free) forcing and compressive (curl-free) forcing, and compare our results to observations. Methods: We solve the equations of hydrodynamics on grids with up to 1024³ cells for purely solenoidal and purely compressive forcing. Eleven lower-resolution models with different forcing mixtures are also analysed. Results: Using Fourier spectra and Δ-variance, we find velocity dispersion-size relations consistent with observations and independent numerical simulations, irrespective of the type of forcing. However, compressive forcing yields stronger compression at the same rms Mach number than solenoidal forcing, resulting in a three times larger standard deviation of volumetric and column density probability distributions (PDFs). We compare our results to different characterisations of several observed regions, and find evidence of different forcing functions. Column density PDFs in the Perseus MC suggest the presence of a mainly compressive forcing agent within a shell, driven by a massive star. Although the PDFs are close to log-normal, they have non-Gaussian skewness and kurtosis caused by intermittency. Centroid velocity increments measured in the Polaris Flare on intermediate scales agree with solenoidal forcing on that scale. However, Δ-variance analysis of the column density in the Polaris Flare suggests that turbulence is driven on large scales, with a significant compressive component on the forcing scale. This indicates that, although likely driven with mostly compressive modes on large scales, turbulence can behave like solenoidal turbulence on smaller scales. Principal component analysis of G216-2.5 and most of the Rosette MC agree with solenoidal forcing, but the interior of an ionised shell within the Rosette MC displays clear signatures of compressive forcing. Conclusions: The strong dependence of the density PDF on the type of forcing must be taken into account in any theory using the PDF to predict properties of star formation. We supply a quantitative description of this dependence. We find that different observed regions show evidence of different mixtures of compressive and solenoidal forcing, with more compressive forcing occurring primarily in swept-up shells. Finally, we emphasise the role of the sonic scale for protostellar core formation, because core formation close to the sonic scale would naturally explain the observed subsonic velocity dispersions of protostellar cores. A movie is only available in electronic form at http://www.aanda.org
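A minimal sketch of the density-PDF diagnostics mentioned above: the width, skewness, and excess kurtosis of s = ln(ρ/ρ₀), computed here on a synthetic field (log-normal core plus a heavy tail mimicking intermittency) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
s = rng.normal(-0.5, 1.0, 100_000)              # near-log-normal core
s[:2000] += rng.exponential(1.5, 2000)          # intermittent high-density tail

mu, sigma = s.mean(), s.std()
skew = np.mean(((s - mu) / sigma) ** 3)         # non-zero -> non-Gaussian PDF
kurt = np.mean(((s - mu) / sigma) ** 4) - 3.0   # excess kurtosis from intermittency
print(f"sigma_s = {sigma:.3f}, skewness = {skew:.3f}, excess kurtosis = {kurt:.3f}")
```

For a purely log-normal PDF both higher moments would be near zero; compressive forcing widens sigma_s, and intermittency shows up in the skewness and kurtosis.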
Jover, J; Haslam, A J; Galindo, A; Jackson, G; Müller, E A
2012-10-14
We present a continuous pseudo-hard-sphere potential based on a cut-and-shifted Mie (generalized Lennard-Jones) potential with exponents (50, 49). Using this potential one can mimic the volumetric, structural, and dynamic properties of the discontinuous hard-sphere potential over the whole fluid range. The continuous pseudo-potential has the advantage that it may be incorporated directly into off-the-shelf molecular-dynamics code, allowing the user to capitalise on existing hardware and software advances. Simulation results for the compressibility factor of the fluid and solid phases of our pseudo hard spheres are presented and compared both to the Carnahan-Starling equation of state of the fluid and published data, the differences being indistinguishable within simulation uncertainty. The specific form of the potential is employed to simulate flexible chains formed from these pseudo hard spheres at contact (pearl-necklace model) for m_c = 4, 5, 7, 8, 16, 20, 100, 201, and 500 monomer segments. The compressibility factor of the chains per unit of monomer, m_c, approaches a limiting value at reasonably small values, m_c < 50, as predicted by Wertheim's first-order thermodynamic perturbation theory. Simulation results are also presented for highly asymmetric mixtures of pseudo hard spheres, with diameter ratios of 3:1, 5:1, 20:1 over the whole composition range.
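A minimal sketch of the cut-and-shifted Mie (50, 49) potential: with the standard Mie prefactor the well depth is -ε at r = (50/49)σ, so cutting there and shifting by +ε yields the purely repulsive pseudo-hard-sphere form:

```python
import numpy as np

N_EXP, M_EXP = 50, 49

def mie_phs(r, eps=1.0, sigma=1.0):
    """Cut-and-shifted Mie (50, 49) pseudo-hard-sphere potential."""
    # Standard Mie prefactor: (n/(n-m)) * (n/m)^(m/(n-m))
    c = (N_EXP / (N_EXP - M_EXP)) * (N_EXP / M_EXP) ** (M_EXP / (N_EXP - M_EXP))
    r_cut = (N_EXP / M_EXP) * sigma            # location of the Mie minimum
    r = np.asarray(r, float)
    u = c * eps * ((sigma / r) ** N_EXP - (sigma / r) ** M_EXP) + eps
    return np.where(r < r_cut, u, 0.0)         # zero at and beyond the cutoff

r = np.array([0.95, 1.0, 1.01, 1.02, 1.1])
print(mie_phs(r))   # steeply repulsive below sigma, ~0 approaching (50/49)*sigma
```

Because the shifted minimum sits exactly at zero, the potential is non-negative everywhere, which is what lets it stand in for a hard sphere in standard MD integrators.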
NASA Astrophysics Data System (ADS)
Volpi, Giorgio; Crosta, Giovanni B.; Colucci, Francesca; Fischer, Thomas; Magri, Fabien
2017-04-01
Geothermal heat is a viable source of energy, and its environmental impact in terms of CO2 emissions is significantly lower than that of conventional fossil fuels. However, its current utilization remains disproportionately small compared with the enormous amount of energy available beneath the surface of the Earth. This is mainly due to the associated uncertainties, for example the lack of appropriate computational tools necessary to perform effective analyses. The aim of the present study is to build an accurate 3D numerical model to simulate the exploitation process of the deep geothermal reservoir of Castel Giorgio - Torre Alfina (central Italy), and to compare the results and performance of parallel simulations performed with TOUGH2 (Pruess et al. 1999), FEFLOW (Diersch 2014) and the open-source software OpenGeoSys (Kolditz et al. 2012). Detailed geological, structural and hydrogeological data, available for the selected area since the early 1970s, show that Castel Giorgio - Torre Alfina is a potential geothermal reservoir with high thermal characteristics (120-150 °C) and fluids such as pressurized water and gas, mainly CO2, hosted in a carbonate formation. Our two-step simulations first recreate the undisturbed natural state of the system and then perform the predictive analysis of the industrial exploitation process. All three codes showed strong numerical accuracy, which was verified by comparing the simulated temperature and pressure values with those measured at the geothermal wells in the area. The results of our simulations demonstrate the sustainability of the investigated geothermal field for the development of a 5 MW pilot plant with total reinjection of the fluids into the original formation. From the thermal point of view, a very efficient buoyant circulation inside the geothermal system has been observed, allowing the reservoir to support the hypothesis of a 50-year production period with a flow rate of 1050 t/h. Furthermore, at the modeled well distances, our simulations showed no interference effects between the production and reinjection wells. Besides providing valuable guidelines for future exploitation of the Castel Giorgio - Torre Alfina deep geothermal reservoir, this example also highlights the broad applicability and high performance of the OpenGeoSys open-source code in handling coupled hydro-thermal simulations. REFERENCES Diersch, H. J. (2014). FEFLOW Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media, Springer-Verlag Berlin Heidelberg, ISBN 978-3-642-38738-8. Kolditz, O., Bauer, S., Bilke, L., Böttcher, N., Delfs, J. O., Fischer, T., Görke, U. J., Kalbacher, T., Kosakowski, G., McDermott, C. I., Park, C. H., Radu, F., Rink, K., Shao, H., Shao, H. B., Sun, F., Sun, Y., Sun, A., Singh, K., Taron, J., Walther, M., Wang, W., Watanabe, N., Wu, Y., Xie, M., Xu, W., Zehner, B. (2012). OpenGeoSys: an open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Environmental Earth Sciences, 67(2), 589-599. Pruess, K., Oldenburg, C. M., & Moridis, G. J. (1999). TOUGH2 user's guide version 2. Lawrence Berkeley National Laboratory.
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
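A minimal sketch of such a conversion: adjectival ratings mapped to basic-event failure probabilities on a log-spaced scale in the spirit of the Swain and Guttmann tables, then combined through a simple OR gate; the specific numbers are illustrative, not the values used in the actual conversion:

```python
# Illustrative mapping from adjectival rating to basic-event failure probability.
FAILURE_PROB = {
    "perfect":           1e-5,   # near-zero risk of failure
    "well":              1e-3,
    "adequate":          1e-2,
    "needs improvement": 1e-1,
    "not performed":     1.0,    # task in a state of failure
}

def basic_event_probability(rating: str) -> float:
    """Map a questionnaire rating to a basic-event failure probability."""
    return FAILURE_PROB[rating.lower()]

# Basic events feed fault-tree gates: for an OR gate, P = 1 - prod(1 - p_i).
ratings = ["well", "adequate", "needs improvement"]
p_none_fail = 1.0
for r in ratings:
    p_none_fail *= 1.0 - basic_event_probability(r)
print(f"OR-gate failure probability: {1.0 - p_none_fail:.4f}")
```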
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Lee, D.; Oreopoulos, L.; Barahona, D.; Nenes, A.; Suarez, M. J.
2012-01-01
A revised version of the Microphysics of clouds with Relaxed Arakawa-Schubert and Aerosol-Cloud interaction (McRAS-AC) scheme, including, among others, the Barahona and Nenes ice nucleation parameterization, is implemented in the GEOS-5 AGCM. Various fields from a 10-year integration of the AGCM with McRAS-AC were compared with their counterparts from an integration of the baseline GEOS-5 AGCM, and with satellite data as observations. Generally, using McRAS-AC reduced biases in cloud fields, and cloud radiative effects are much better over most regions of the Earth. Two weaknesses are identified in the McRAS-AC runs, namely too few cloud particles around 40°S-60°S and too high a cloud water path during northern hemisphere summer over the Gulf Stream and North Pacific. Sensitivity analyses showed that these biases potentially originate from biases in the aerosol input. The first bias is largely eliminated in a sensitivity test using 50% smaller aerosol particles, while the second bias is much reduced when interactive aerosol chemistry is turned on. The main drawback of McRAS-AC is a dearth of low-level marine stratus clouds, probably due to the lack of dry convection, which is not yet implemented in the cloud scheme. Despite these biases, McRAS-AC simulates realistic clouds and optical properties that can improve further with better aerosol input. Its aerosol indirect effect simulation capability, involving quite realistic prediction of cloud particle number concentration and effective particle size for both convective and stratiform clouds, gives it the potential to be a valuable tool for climate modeling research.
A medical image-based graphical platform -- features, applications and relevance for brachytherapy.
Fonseca, Gabriel P; Reniers, Brigitte; Landry, Guillaume; White, Shane; Bellezzo, Murillo; Antunes, Paula C G; de Sales, Camila P; Welteman, Eduardo; Yoriyaz, Hélio; Verhaegen, Frank
2014-01-01
Brachytherapy dose calculation is commonly performed using the Task Group No. 43 Report-Updated protocol (TG-43U1) formalism. Recently, a more accurate approach has been proposed that can handle tissue composition, tissue density, body shape, applicator geometry, and dose reporting either in media or in water. Some model-based dose calculation algorithms are based on Monte Carlo (MC) simulations. This work presents a software platform capable of processing medical images and treatment plans, and preparing the required input data for MC simulations. The A Medical Image-based Graphical platfOrm - Brachytherapy module (AMIGOBrachy) is a user interface, coupled to the MCNP6 MC code, for absorbed dose calculations. The AMIGOBrachy was first validated in water for a high-dose-rate 192Ir source. Next, dose distributions were validated in uniform phantoms consisting of different materials. Finally, dose distributions were obtained in patient geometries. Results were compared against a treatment planning system including a linear Boltzmann transport equation (LBTE) solver capable of handling nonwater heterogeneities. The TG-43U1 source parameters are in good agreement with the literature, with more than 90% of anisotropy values within 1%. No significant dependence on the tissue composition was observed when comparing MC results against the LBTE solver. Clinical cases showed differences up to 25% when comparing MC results against TG-43U1. About 92% of the voxels exhibited dose differences lower than 2% when comparing MC results against the LBTE solver. The AMIGOBrachy can improve the accuracy of the TG-43U1 dose calculation by using a more accurate MC dose calculation algorithm, and it can be incorporated into clinical practice via a user-friendly graphical interface. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Bhola, Ruchi; Bhalla, Swaran; Gupta, Radha; Singh, Ishwar; Kumar, Sunil
2014-05-01
Literature suggests that the glottic view is better with the McGrath(®) video laryngoscope and Truview(®) than with the Macintosh blade. The purpose of this study was to evaluate the effectiveness of the McGrath video laryngoscope in comparison with the Truview laryngoscope for tracheal intubation in patients with simulated cervical spine injury using manual in-line stabilisation. This prospective randomised study was undertaken in the operating theatre of a tertiary referral centre after approval from the Institutional Review Board. A total of 100 consenting patients presenting for elective surgery requiring tracheal intubation were randomly assigned to undergo intubation using the McGrath(®) video laryngoscope (n = 50) or the Truview(®) laryngoscope (n = 50). In all patients, we applied manual in-line stabilisation of the cervical spine throughout the airway management. Statistical testing was conducted with the Statistical Package for the Social Sciences (SPSS) version 17.0. Demographic data, airway assessment and haemodynamics were compared using the Chi-square test. A P < 0.05 was considered significant. The time to successful intubation was shorter with the McGrath video laryngoscope than with the Truview (30.02 s vs. 38.72 s). However, there was no significant difference between the laryngoscopic views obtained in the two groups. The number of second intubation attempts required and the incidence of complications were negligible with both devices. The success rate of intubation with both devices was 100%. Intubation with the McGrath video laryngoscope caused smaller alterations in haemodynamics. Both laryngoscopes are reliable in cases of simulated cervical spine injury using manual in-line stabilisation, with a 100% success rate and a good glottic view.
MMU development at the Martin Marietta plant in Denver, Colorado
1980-07-25
S80-36889 (24 July 1980) --- Astronaut Bruce McCandless II uses a simulator at Martin Marietta's space center near Denver to develop flight techniques for a backpack propulsion unit that will be used on Space Shuttle flights. The manned maneuvering unit (MMU) training simulator allows astronauts to "fly missions" against a full-scale mockup of a portion of the orbiter vehicle. Controls of the simulator are like those of the actual MMU. Manipulating them allows the astronaut to move in three straight-line directions and in pitch, yaw and roll. One possible application of the MMU is for an extravehicular activity chore to repair damaged tiles on the vehicle. McCandless is wearing an extravehicular mobility unit (EMU).
Feaster, Toby D.; Westcott, Nancy E.; Hudson, Robert J.M.; Conrads, Paul; Bradley, Paul M.
2012-01-01
Rainfall is an important forcing function in most watershed models. As part of a previous investigation to assess interactions among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations in the Edisto River Basin, the topography-based hydrological model (TOPMODEL) was applied in the McTier Creek watershed in Aiken County, South Carolina. Measured rainfall data from six National Weather Service (NWS) Cooperative (COOP) stations surrounding the McTier Creek watershed were used to calibrate the McTier Creek TOPMODEL. Since the 1990s, the next generation weather radar (NEXRAD) has provided rainfall estimates at a finer spatial and temporal resolution than the NWS COOP network. For this investigation, NEXRAD-based rainfall data were generated at the NWS COOP stations and compared with measured rainfall data for the period June 13, 2007, to September 30, 2009. Likewise, these NEXRAD-based rainfall data were used with TOPMODEL to simulate streamflow in the McTier Creek watershed, and the results were compared with simulations made using measured rainfall data. NEXRAD-based rainfall data for non-zero rainfall days were lower than measured rainfall data at all six NWS COOP locations. The total number of concurrent days for which both measured and NEXRAD-based data were available at the COOP stations ranged from 501 to 833, the number of non-zero days ranged from 139 to 209, and the total difference in rainfall ranged from -1.3 to -21.6 inches. With the calibrated TOPMODEL, simulations using NEXRAD-based rainfall data and those using measured rainfall data produced similar results with respect to matching the timing and shape of the hydrographs. Comparison of the bias, which is the mean of the residuals between observed and simulated streamflow, however, reveals that simulations using NEXRAD-based rainfall tended to underpredict streamflow overall. Given that the total NEXRAD-based rainfall for the simulation period is lower than the total measured rainfall at the NWS COOP locations, this bias would be expected. Therefore, to better assess the effect of using NEXRAD-based rainfall estimates rather than NWS COOP rainfall data on the hydrologic simulations, TOPMODEL was recalibrated and updated simulations were made using the NEXRAD-based rainfall data. Comparisons of observed and simulated streamflow show that the TOPMODEL results using measured rainfall data and NEXRAD-based rainfall are comparable. Nonetheless, TOPMODEL simulations using NEXRAD-based rainfall still tended to underpredict total streamflow volume, although the magnitudes of the differences were similar to those from simulations using measured rainfall. The McTier Creek watershed was subdivided into 12 subwatersheds and NEXRAD-based rainfall data were generated for each subwatershed. Simulations of streamflow were generated for each subwatershed using NEXRAD-based rainfall and compared with subwatershed simulations using measured rainfall data, which unlike the NEXRAD-based rainfall were the same data for all subwatersheds (derived from a weighted average of the six NWS COOP stations surrounding the basin). For the two simulations, subwatershed streamflows were summed and compared to streamflow simulations at two U.S. Geological Survey streamgages. The percentage differences at the gage near Monetta, South Carolina, were the same for simulations using measured rainfall data and NEXRAD-based rainfall.
At the gage near New Holland, South Carolina, the percentage differences using the NEXRAD-based rainfall were twice those using the measured rainfall. Single-mass curve comparisons showed an increase in the total volume of rainfall from north to south. Comparisons of the measured rainfall at the NWS COOP stations showed similar percentage differences, but the NEXRAD-based rainfall variations occurred over a much smaller distance than the measured rainfall variations. Nonetheless, it was concluded that in some cases, using NEXRAD-based rainfall data in TOPMODEL streamflow simulations may provide an effective alternative to using measured rainfall data. For this investigation, however, TOPMODEL streamflow simulations using NEXRAD-based rainfall data for both calibration and simulation did not show significant improvements in matching observed streamflow over simulations generated using measured rainfall data.
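The bias metric used in the TOPMODEL comparison above is simply the mean of the residuals between observed and simulated streamflow. A minimal sketch with illustrative values:

```python
import numpy as np

# Sketch of the bias metric defined above: the mean of the residuals
# between observed and simulated streamflow. A positive bias here means
# the model under-predicts the observations. All values are illustrative.
observed = np.array([12.0, 30.5, 8.2, 15.1, 22.7])    # daily streamflow (hypothetical units)
simulated = np.array([10.4, 28.0, 8.0, 13.9, 21.1])
bias = np.mean(observed - simulated)
print(f"bias = {bias:.2f}")
```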
Microcystin distribution in physical size class separations of natural plankton communities
Graham, J.L.; Jones, J.R.
2007-01-01
Phytoplankton communities in 30 northern Missouri and Iowa lakes were physically separated into 5 size classes (>100 µm, 53-100 µm, 35-53 µm, 10-35 µm, 1-10 µm) during 15-21 August 2004 to determine the distribution of microcystin (MC) in size-fractionated lake samples and assess how net collections influence estimates of MC concentration. MC was detected in whole water (total) from 83% of lakes sampled, and total MC values ranged from 0.1-7.0 µg/L (mean = 0.8 µg/L). On average, MC in the >100 µm size class comprised ~40% of total MC, while other individual size classes contributed 9-20% to total MC. MC values decreased with size class and were significantly greater in the >100 µm size class (mean = 0.5 µg/L) than the 35-53 µm (mean = 0.1 µg/L), 10-35 µm (mean = 0.0 µg/L), and 1-10 µm (mean = 0.0 µg/L) size classes (p < 0.01). MC values in nets with 100-µm, 53-µm, 35-µm, and 10-µm mesh were cumulatively summed to simulate the potential bias of measuring MC with various size plankton nets. On average, a 100-µm net underestimated total MC by 51%, compared to 37% for a 53-µm net, 28% for a 35-µm net, and 17% for a 10-µm net. While plankton nets consistently underestimated total MC, concentration of algae with net sieves allowed detection of MC at low levels (~0.01 µg/L); 93% of lakes had detectable levels of MC in concentrated samples. Thus, small-mesh plankton nets are an option for documenting MC occurrence, but whole water samples should be collected to characterize total MC concentrations. © Copyright by the North American Lake Management Society 2007.
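The cumulative-summing procedure described above can be sketched in a few lines: a net of a given mesh retains only the size classes at or above that mesh, and the shortfall relative to whole-water MC is the net's underestimation. The concentrations below are illustrative, not the study's data.

```python
# Sketch of the net-bias calculation: sum the MC retained by each net mesh
# and compare it to the whole-water total. All concentrations (ug/L) are
# hypothetical placeholders.
size_class_mc = {
    ">100 um": 0.32, "53-100 um": 0.12, "35-53 um": 0.10,
    "10-35 um": 0.14, "1-10 um": 0.12,
}
total_mc = sum(size_class_mc.values())

retained = {  # size classes retained by each net mesh
    "100-um net": [">100 um"],
    "53-um net": [">100 um", "53-100 um"],
    "35-um net": [">100 um", "53-100 um", "35-53 um"],
    "10-um net": [">100 um", "53-100 um", "35-53 um", "10-35 um"],
}
for net, classes in retained.items():
    caught = sum(size_class_mc[c] for c in classes)
    print(f"{net}: underestimates total MC by {100 * (1 - caught / total_mc):.0f}%")
```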
Radio Frequency Scanning and Simulation of Oriented Strand Board Material Property
NASA Astrophysics Data System (ADS)
Liu, Xiaojian; Zhang, Jilei; Steele, Philip H.; Donohoe, J. Patrick
2008-02-01
Oriented strandboard (OSB) is a wood composite product with the largest market share in U.S. residential and commercial construction. Wood specific gravity (SG) and moisture content (MC) play an important role in the OSB manufacturing process. They are two of the critical variables that manufacturers are required to monitor, locate, and control in order to produce a product with consistent quality. In this study, radio frequency scanning nondestructive evaluation (NDE) technology was used to evaluate the local-area MC and SG of OSB panels following panel production by hot pressing. A finite element simulation tool was used to optimize the sensor geometry and to investigate the interaction between the electromagnetic field and the dielectric properties of wood. Our results indicate that the RF scanning response is closely correlated with the MC and SG variations in OSB panels. Radio frequency NDE appears to have potential as an effective method for ensuring OSB panel quality during manufacturing.
Some numerical methods for the Hele-Shaw equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, N.
1994-03-01
Tryggvason and Aref used a boundary integral method and the vortex-in-cell method to evolve the interface between two fluids in a Hele-Shaw cell. The method gives excellent results for intermediate values of the nondimensional surface tension parameter. The results differ from the predictions of McLean and Saffman for small surface tension. For large surface tension, there are some numerical problems. In this paper, we implement the method of Tryggvason and Aref but use the point vortex method instead of the vortex-in-cell method. A parametric spline is used to represent the interface. The finger widths obtained agree well with those predicted by McLean and Saffman. We conclude that the method of Tryggvason and Aref can provide excellent results but that the vortex-in-cell method may not be the method of choice for extreme values of the surface tension parameter. In a second method, we represent the interface with a Fourier representation. In addition, an alternative way of discretizing the boundary integral is used. Our results are compared to the linearized theory and the results of McLean and Saffman and are shown to be highly accurate. 21 refs., 4 figs., 2 tabs.
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; ...
2015-06-08
Lattice spin-fermion models are quite important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically, while the slow variables, generically referred to as the “spins,” are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The “traveling cluster approximation” (TCA) is a real-space variant of the ED + MC method that allows spin-fermion problems to be solved on lattices of up to 10³ sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10⁴ lattice sites, and with some effort on 10⁵ lattice sites, representing the record lattice sizes studied for this family of models.
NASA Astrophysics Data System (ADS)
He, An; Gong, Jiaming; Shikazono, Naoki
2018-05-01
In the present study, a model is introduced to correlate the electrochemical performance of a solid oxide fuel cell (SOFC) with the 3D microstructure reconstructed by focused ion beam scanning electron microscopy (FIB-SEM), in which the solid surface is modeled by the marching cubes (MC) method. The lattice Boltzmann method (LBM) is used to solve the governing equations. In order to maintain the geometries reconstructed by the MC method, local effective diffusivities and conductivities computed from the MC geometries are applied in each grid cell, and a partial bounce-back scheme is applied according to the boundary predicted by the MC method. From the tortuosity factor and overpotential calculations, it is concluded that the MC geometry drastically improves the computational accuracy by giving more precise topology information.
Electrons to Reactors Multiscale Modeling: Catalytic CO Oxidation over RuO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutton, Jonathan E.; Lorenzi, Juan M.; Krogel, Jaron T.
2018-04-20
First-principles kinetic Monte Carlo (1p-kMC) simulations for CO oxidation on two RuO2 facets, RuO2(110) and RuO2(111), were coupled to the computational fluid dynamics (CFD) simulation package MFIX, and reactor-scale simulations were then performed. 1p-kMC coupled with CFD has recently been shown to be a feasible method for translating molecular-scale mechanistic knowledge to the reactor scale, enabling comparisons to in situ and online experimental measurements. Only a few studies with such coupling have been published. This work incorporates multiple catalytic surface facets into the scale-coupled simulation, and three possibilities were investigated: the two possibilities of each facet individually being the dominant phase in the reactor, and also the possibility that both facets were present on the catalyst particles in the ratio predicted by an ab initio thermodynamics-based Wulff construction. When lateral interactions between adsorbates were included in the 1p-kMC simulations, the two surfaces, RuO2(110) and RuO2(111), were found to be of similar order of magnitude in activity for the pressure range of 1 × 10⁻⁴ bar to 1 bar, with the RuO2(110) surface termination showing more simulated activity than the RuO2(111) surface termination. Coupling between the 1p-kMC and CFD was achieved with a lookup table generated by the error-based modified Shepard interpolation scheme. Isothermal reactor-scale simulations were performed and compared to two separate experimental studies, conducted with reactant partial pressures of ≤0.1 bar. Simulations without an isothermality restriction were also conducted and showed that the simulated temperature gradient across the catalytic reactor bed is <0.5 K, which validated the use of the isothermality restriction for investigating the reactor-scale phenomenological temperature dependences. The Wulff-construction-based reactor simulations reproduced a trend similar to one experimental data set relatively well, with the (110) surface being more active at higher temperatures; in contrast, for the other experimental data set, our reactor simulations achieve surprisingly and perhaps fortuitously good agreement with the activity and phenomenological pressure dependence when it is assumed that the (111) facet is the only active facet present. Lastly, the active phase of catalytic CO oxidation over RuO2 remains unsettled, but the present study presents proof of principle (and progress) toward more accurate multiscale modeling from electrons to reactors and new simulation results.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly, to clinically feasible levels, without the sizable investment in a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer and then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
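The inverse power model mentioned above relates runtime to cluster size as t(n) = a·n^(-b). A short sketch of such a fit follows; only the single-node (53 min) and 20-node (3.11 min) runtimes come from the abstract, and the intermediate timings are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an inverse power model t(n) = a * n**(-b) to (cluster size, runtime)
# pairs. Endpoints follow the abstract; intermediate points are invented.
nodes = np.array([1, 2, 5, 10, 20])
runtime_min = np.array([53.0, 27.5, 11.8, 6.2, 3.11])

def inverse_power(n, a, b):
    return a * n**(-b)

(a, b), _ = curve_fit(inverse_power, nodes, runtime_min, p0=(53.0, 1.0))
print(f"t(n) ~ {a:.1f} * n^-{b:.2f} minutes")
```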
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Du, X; Su, L
2014-06-15
Purpose: To compare the CT doses derived from experiments and GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned with a GE LightSpeed Pro 16 MDCT. For the cadaver study, Thimble chambers (Models 10×5−0.6CT and 10×6−0.6CT) were used to measure the absorbed dose in different deep and superficial organs. Whole-body scans were first performed to construct a complete image database for MC simulations. Abdomen/pelvis helical scans were then conducted using 120/100 kVp, 300 mAs, and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed using 120 kVp and x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the results by ARCHER-CT were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas and urinary bladder were compared. The difference between experiments and simulations was within 19% for the 120 kVp scan and 25% for the 100 kVp scan. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys and thymus were compared. The difference was 39.2% for the esophagus, and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to the systematic errors of the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm used to generate the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. This work was supported by R01 grant (R01EB015478) from the National Institute of Biomedical Imaging and Bioengineering.
NASA Astrophysics Data System (ADS)
Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn
While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework for solving the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After presenting numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristics yield solutions close to the optimal ones obtained from exact optimization.
Modelling the structural response of cotton plants to mepiquat chloride and population density
Gu, Shenghao; Evers, Jochem B.; Zhang, Lizhen; Mao, Lili; Zhang, Siping; Zhao, Xinhua; Liu, Shaodong; van der Werf, Wopke; Li, Zhaohu
2014-01-01
Background and Aims: Cotton (Gossypium hirsutum) has indeterminate growth. The growth regulator mepiquat chloride (MC) is used worldwide to restrict vegetative growth and promote boll formation and yield. The effects of MC are modulated by complex interactions with growing conditions (nutrients, weather) and plant population density; as a result, the effects on plant form are not fully understood and are difficult to predict. The use of MC is thus hard to optimize. Methods: To explore crop responses to plant density and MC, a functional-structural plant model (FSPM) for cotton (named CottonXL) was designed. The model was calibrated using one year of field data and validated using two additional years of detailed experimental data on the effects of MC and plant density in stands of pure cotton and in intercrops of cotton with wheat. CottonXL simulates the development of leaves and fruits (squares, flowers and bolls), plant height and branching. Crop development is driven by thermal time, population density, MC application, and topping of the main stem and branches. Key Results: Validation of the model showed good correspondence between simulated and observed values for leaf area index, with an overall root-mean-square error of 0.50 m² m⁻², and an overall prediction error of less than 10% for number of bolls, plant height, number of fruit branches and number of phytomers. Canopy structure became more compact with the decrease in leaf area index and internode length due to the application of MC. Moreover, MC did not have a substantial effect on boll density but increased lint yield at higher densities. Conclusions: The model satisfactorily represents the effects of agronomic measures on cotton plant structure. It can be used to identify optimal agronomic management of cotton to achieve optimal plant structure for maximum yield under varying environmental conditions. PMID:24489020
NASA Astrophysics Data System (ADS)
Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.
2016-12-01
Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long-term isolation. Characterization of the natural barrier system is an important component of this disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method of McKenna and Reeves (2005) has been updated to provide capabilities that enhance the representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended to include three different methods. (1) The Sequential Gaussian Simulation (SGSIM) method uses spatial correlation to generate fractures and define their properties for the FCM. (2) The ELLIPSIM method randomly generates a specified number of ellipses with properties defined by probability distributions; each ellipse represents a single fracture. (3) Direct conversion of discrete fracture network (DFN) output. Test simulations of flow and transport were conducted using the ELLIPSIM and direct DFN conversion methods. The simulations used a 1 km × 1 km × 1 km model domain and a structured grid with blocks of 10 m × 10 m × 10 m, resulting in a total of 10⁶ grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN code (Hammond et al., 2014) was used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method for modeling fractured crystalline rocks. The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains. SAND2016-7509 A
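The ELLIPSIM idea described above, randomly generated fractures mapped onto a regular grid as enhanced-permeability cells, can be illustrated with a deliberately simplified 2D sketch (line segments instead of 3D ellipses; all parameter values are illustrative, not the study's).

```python
import numpy as np

# Simplified 2D sketch of mapping random fractures onto a regular grid:
# every cell a fracture crosses gets the (much higher) fracture
# permeability. Segment counts, lengths, and permeabilities are invented.
rng = np.random.default_rng(3)
n_cells, k_matrix, k_fracture = 100, 1e-18, 1e-12     # grid size; permeabilities (m^2)
perm = np.full((n_cells, n_cells), k_matrix)

for _ in range(40):                                    # 40 random fractures
    x0, y0 = rng.uniform(0, n_cells, 2)                # random start point
    angle = rng.uniform(0, np.pi)                      # random orientation
    length = rng.uniform(10, 40)                       # random length (cells)
    t = np.linspace(0, length, 200)                    # sample points along the segment
    xi = (x0 + t * np.cos(angle)).astype(int)
    yi = (y0 + t * np.sin(angle)).astype(int)
    keep = (xi >= 0) & (xi < n_cells) & (yi >= 0) & (yi < n_cells)
    perm[xi[keep], yi[keep]] = k_fracture              # mark crossed cells

print(f"fractured cells: {(perm > k_matrix).sum()}")
```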
Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Stierstorfer, Karl; Kappler, Steffen
2016-12-01
An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well.) As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross talk and the lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled using two simplifying assumptions: energy conservation and mutual exclusiveness. Mutual exclusiveness means that no more than two pixels measure energy from one photon. The effect of model parameters has been studied and results were compared with MC simulations. The agreement, with respect to the spectrum, was evaluated using the reduced χ² statistic, a weighted sum of squared errors χ²_red (≥1), where χ²_red = 1 indicates a perfect fit. The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ²_red values were 1.049 with the 90 keV data and 1.007 with the 140 kVp data. The degrees of cross talk (in terms of the relative increase from single-pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for the MC simulations, and 21% and 17%, respectively, for the model. The covariance was in strong qualitative agreement, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and the resulting spatio-energetic correlation between PCD pixels. The MC simulation validated the accuracy.
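A deliberately simplified sketch of the double-counting mechanism described above, under the two stated assumptions (energy conservation and mutual exclusiveness): each incident photon is Poisson-sampled, and a boundary interaction splits its energy between exactly two adjacent pixels. The sharing probability and energy-split distribution are illustrative, not the paper's calibrated model.

```python
import numpy as np

# Toy charge-sharing generator: a photon either deposits its full energy
# in one pixel or splits it between exactly two neighbours (mutual
# exclusiveness), with the split fractions summing to 1 (energy
# conservation). p_share and the split distribution are hypothetical.
rng = np.random.default_rng(0)

def simulate_counts(n_pixels=8, mean_photons=10000, e_in=90.0,
                    p_share=0.2, bins=(0.0, 45.0, 200.0)):
    """Return per-pixel counts in two energy bins for a flat field."""
    counts = np.zeros((n_pixels, len(bins) - 1), dtype=int)
    n_photons = rng.poisson(mean_photons)            # Poisson photon number
    pix = rng.integers(0, n_pixels, size=n_photons)  # primary pixel hit
    shared = rng.random(n_photons) < p_share         # near-boundary interaction?
    frac = rng.uniform(0.1, 0.9, size=n_photons)     # energy split fraction
    for i in range(n_photons):
        if shared[i]:
            nb = (pix[i] + rng.choice((-1, 1))) % n_pixels   # one neighbour only
            counts[pix[i], np.digitize(frac[i] * e_in, bins) - 1] += 1
            counts[nb, np.digitize((1 - frac[i]) * e_in, bins) - 1] += 1
        else:
            counts[pix[i], np.digitize(e_in, bins) - 1] += 1
    return counts

print(simulate_counts().sum(axis=0))   # total counts per energy bin
```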
NASA Astrophysics Data System (ADS)
Dal Molin, J. P.; Caliri, A.
2018-01-01
Here we focus on the conformational search for the native structure when it is ruled by the hydrophobic effect and the steric specificities coming from the amino acids. Our main tool of investigation is a 3D lattice model provided with a ten-letter alphabet, the stereochemical model. This minimalist model was conceived for Monte Carlo (MC) simulations with the kinetic behavior of protein-like chains in solution in mind. We have three central goals here. The first is to characterize the folding time (τ) by two distinct sampling methods, so we present two sets of 10³ MC simulations for a fast protein-like sequence. The resulting sets of characteristic folding times, τ and τ_q, were obtained by applying the standard Metropolis algorithm (MA) and an enhanced algorithm (MqA), respectively. The finding for τ_q shows two things: (i) the chain-solvent hydrophobic interactions {h_k}, plus a set of inter-residue steric constraints {c_ij}, are able to emulate the conformational search for the native structure; for each of the 10³ MC simulations performed, the target is always found within a finite time window; (ii) the ratio τ_q/τ ≅ 1/10 suggests that the effect of local thermal fluctuations, encompassed by the Tsallis weight, gives the chain an innate efficiency to escape from energetic and steric traps. We performed additional MC simulations with variations of our design rule to confirm this first result; both algorithms, the MA and the MqA, were applied to a restricted set of targets, and a physical insight is provided. Our second finding was obtained from a set of 600 independent MC simulations, performed only with the MqA applied to an extended set of 200 representative targets, our native structures. The results show how structural patterns modulate τ_q, which covers four orders of magnitude; this finding is our second goal. The third, and last, result was obtained with a special kind of simulation performed to explore a possible connection between the hydrophobic component of protein stability and the native structural topology. We simulated those same 200 targets again, with the MqA only; this time, however, we evaluated the relative frequency {ϕ_q} with which each target visits its corresponding native structure over an appropriate simulation time. Due to the presence of the hydrophobic effect in our approach, we obtained a strong correlation between stability and folding rate (R = 0.85): the faster a sequence finds its target, the larger the hydrophobic component of its stability. This strong correlation fulfills our last goal and suggests that the hydrophobic effect could indeed act as a general stabilizing factor for proteins.
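The contrast between the two sampling methods above comes down to the acceptance rule. Below is a minimal sketch comparing the standard Metropolis (Boltzmann) acceptance with a generalized Tsallis weight of the kind used by enhanced algorithms such as the MqA; the q and temperature values are illustrative, not the paper's.

```python
import math
import random

# Standard Metropolis acceptance: exponential Boltzmann weight.
def accept_metropolis(delta_e, temp):
    return delta_e <= 0 or random.random() < math.exp(-delta_e / temp)

# Generalized Tsallis acceptance: for q > 1 the weight has a power-law
# tail heavier than the exponential, which helps escape energetic traps.
# q -> 1 recovers the Boltzmann weight. q = 1.1 here is illustrative.
def accept_tsallis(delta_e, temp, q=1.1):
    if delta_e <= 0:
        return True
    base = 1.0 - (1.0 - q) * delta_e / temp
    weight = base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
    return random.random() < weight

random.seed(1)
print(accept_metropolis(0.5, 1.0), accept_tsallis(0.5, 1.0))
```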
Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch
2016-11-14
Self-Consistent Field (SCF) theory serves as an efficient tool for studying the mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined by combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase in coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory, which uses partial enumeration to describe discrete WLC. In the MC simulations the Helmholtz free energy is calculated as a function of the strength of the MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of TI for LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscapes in MC and SCF are directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate for the missing correlations. Increasing σ reduces correlations and SCF reproduces the MC free energy well.
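The thermodynamic integration step mentioned above evaluates a free-energy difference as the integral of the ensemble average <dH/dλ> over a coupling parameter λ. A minimal sketch using the trapezoid rule over hypothetical per-λ averages:

```python
import numpy as np

# Thermodynamic integration in a nutshell: dF = integral over lambda of
# <dH/dlambda>, where each average comes from a separate equilibrium MC
# run at fixed lambda. The averages below are hypothetical placeholders.
lam = np.linspace(0.0, 1.0, 11)
dh_dlam = np.array([0.00, 0.12, 0.25, 0.41, 0.55, 0.68,
                    0.79, 0.88, 0.95, 0.99, 1.01])     # <dH/dlambda> per run
delta_f = np.trapz(dh_dlam, lam)                        # trapezoid-rule integral
print(f"Delta F = {delta_f:.3f} (illustrative units of k_B T)")
```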
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing, and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches to designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (via the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50 yr). Using these T and P scenarios, we simulate daily streamflow with the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approach. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
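A minimal sketch of the non-stationary MC experiment described above: each year of the design lifespan gets its own GEV parameters (here a hypothetical linear trend in the location parameter), and 10,000 lifespan realizations are sampled to estimate a design quantile. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

# Non-stationary MC: one GEV per year of a 50-yr design lifespan, with a
# hypothetical linear trend in the location parameter; 10,000 lifespan
# realizations give the distribution of the lifespan maximum.
rng = np.random.default_rng(42)
lifespan = 50                                   # design lifespan, years
shape = -0.1                                    # GEV shape (scipy convention)
loc = 100.0 + 0.5 * np.arange(lifespan)         # drifting location (illustrative)
scale = 20.0

# 10,000 realizations x 50 years of annual maxima; loc broadcasts per year.
annual_max = genextreme.rvs(shape, loc=loc, scale=scale,
                            size=(10000, lifespan), random_state=rng)
lifespan_max = annual_max.max(axis=1)
print("99th percentile of lifespan maxima:", np.percentile(lifespan_max, 99))
```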
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerstein, Alan R.; Sayler, B. J.; Wunsch, S.
2010-05-01
Recent work suggests that cloud effects remain one of the largest sources of uncertainty in model-based estimates of climate sensitivity. In particular, the entrainment rate in stratocumulus-topped mixed layers needs better models. More than thirty years ago, a clever laboratory experiment was conducted by McEwan and Paltridge to examine an analog of the entrainment process at the top of stratiform clouds. Sayler and Breidenthal extended this pioneering work and determined the effect of the Richardson number on the dimensionless entrainment rate. The experiments gave hints that the interaction between molecular effects and the one-sided turbulence is crucial for understanding entrainment. From the numerical point of view, large-eddy simulation (LES) does not allow explicitly resolving all the fine-scale processes at the entrainment interface. Direct numerical simulation (DNS) is limited in Reynolds number and is not the tool of choice for parameter studies. Therefore it is useful to investigate new modeling strategies, such as stochastic turbulence models, which allow sufficient resolution in at least one dimension while having acceptable run times. We present results of the One-Dimensional Turbulence stochastic simulation model applied to the experimental setup of Sayler and Breidenthal. The results on radiatively induced entrainment follow quite well the scaling of the entrainment rate with the Richardson number that was experimentally found for a set of trials. Moreover, we investigate the influence of molecular effects, the fluids' optical properties, and the artifact of parasitic turbulence experimentally observed in the laminar layer. In the simulations the parameters are varied systematically over even larger ranges than in the experiment. Based on the obtained results, a more complex parameterization of the entrainment rate than currently discussed in the literature seems to be necessary.
Newtonian CAFE: a new ideal MHD code to study the solar atmosphere
NASA Astrophysics Data System (ADS)
González, J. J.; Guzmán, F.
2015-12-01
In this work we present a new independent code designed to solve the equations of classical ideal magnetohydrodynamics (MHD) in three dimensions, subject to a constant gravitational field. The purpose of the code centers on the analysis of solar phenomena within the photosphere-corona region. In particular, the code is capable of simulating the propagation of impulsively generated linear and non-linear MHD waves in the non-isothermal solar atmosphere. We present 1D and 2D standard tests to demonstrate the quality of the numerical results obtained with our code. As 3D tests we present the propagation of MHD-gravity waves and vortices in the solar atmosphere. The code is based on high-resolution shock-capturing methods, and uses the HLLE flux formula combined with the minmod, MC, and WENO5 reconstructors. The divergence-free magnetic field constraint is enforced using the Flux Constrained Transport method.
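Two of the reconstructors named above, minmod and the monotonized central (MC) limiter, can be sketched as slope limiters acting on the one-sided differences of cell averages. A minimal sketch, with illustrative data:

```python
import numpy as np

# Slope limiters used in high-resolution shock-capturing reconstruction,
# applied to left/right one-sided slopes a and b of the cell averages.
def minmod(a, b):
    # Smallest-magnitude slope when a and b share a sign; zero at extrema.
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

def mc_limiter(a, b):
    # Monotonized central limiter: minmod(2a, 2b, (a+b)/2); zero at extrema.
    return np.where(a * b <= 0.0, 0.0,
                    np.sign(a) * np.minimum.reduce(
                        [2 * np.abs(a), 2 * np.abs(b), 0.5 * np.abs(a + b)]))

u = np.array([0.0, 0.1, 0.5, 1.0, 1.0])       # cell averages (illustrative)
a, b = np.diff(u)[:-1], np.diff(u)[1:]        # one-sided slopes, interior cells
print(minmod(a, b), mc_limiter(a, b))
```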
Shear failure of granular materials
NASA Astrophysics Data System (ADS)
Degiuli, Eric; Balmforth, Neil; McElwaine, Jim; Schoof, Christian; Hewitt, Ian
2012-02-01
Connecting the macroscopic behavior of granular materials with the microstructure remains a great challenge. Recent work connects these scales with a discrete calculus [1]. In this work we generalize this formalism from monodisperse packings of disks to 2D assemblies of arbitrarily shaped grains. In particular, we derive Airy's expression for a symmetric, divergence-free stress tensor. Using these tools, we derive, from first principles and in a mean-field approximation, the entropy of frictional force configurations in the Force Network Ensemble. As a macroscopic consequence of the Coulomb friction condition at contacts, we predict shear failure at a critical shear stress, in accordance with the Mohr-Coulomb failure condition well known in engineering. Results are compared with numerical simulations, and the dependence on the microscopic geometric configuration is discussed. [1] E. DeGiuli & J. McElwaine, PRE 2011. doi: 10.1103/PhysRevE.84.041310
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but most computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as a benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and the speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product or for verifying surrogate and analytical collision probability methods.
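A CPU sketch of the core MC estimate described above: sample both objects' positions from their uncertainty distributions (here Gaussian, at a fixed epoch rather than propagated through a time window, which is a simplification of the tool's approach) and count separations below the combined collision radius. All covariances and radii are illustrative.

```python
import numpy as np

# Toy MC collision-probability estimate: draw positions of two objects
# from Gaussian uncertainties and count how often their separation falls
# below the combined collision radius. All numbers are hypothetical.
rng = np.random.default_rng(7)
n = 1_000_000

mean_a, mean_b = np.zeros(3), np.array([0.05, 0.0, 0.0])   # km
cov_a = np.diag([0.02, 0.05, 0.01])**2                     # km^2
cov_b = np.diag([0.03, 0.04, 0.02])**2
r_combined = 0.02                                           # km

pos_a = rng.multivariate_normal(mean_a, cov_a, size=n)
pos_b = rng.multivariate_normal(mean_b, cov_b, size=n)
sep = np.linalg.norm(pos_a - pos_b, axis=1)

p_hit = np.mean(sep < r_combined)
std_err = np.sqrt(p_hit * (1 - p_hit) / n)                  # binomial standard error
print(f"P(collision) = {p_hit:.2e} +/- {std_err:.1e}")
```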
NASA Astrophysics Data System (ADS)
Dünser, Simon; Meyer, Daniel W.
2016-06-01
In most groundwater aquifers, dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field. This effect is referred to as macrodispersion. Since in practice the complete conductivity field is virtually never available beyond a few point measurements, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations from a given geostatistical model of the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational costs of MC, the polar Markovian velocity process (PMVP) model was recently introduced, delivering predictions at about three orders of magnitude smaller computing times. In artificial test cases, the PMVP model has provided good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup. The setup considered is derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary, with a large number of unevenly scattered conductivity measurements. Validation against reference MC simulations showed good overall agreement. Moreover, simulations of a simplified setup with a single measurement were conducted in order to reassess the model's most fundamental assumptions and to provide guidance for model improvements.
Gravity affects the responsiveness of Runx2 to 1, 25-dihydroxyvitamin D3 (VD3)
NASA Astrophysics Data System (ADS)
Guo, Feima; Dai, Zhongquan; Wu, Feng; Liu, Zhaoxia; Tan, Yingjun; Wan, Yumin; Shang, Peng; Li, Yinghui
2013-03-01
Bone loss resulting from spaceflight is mainly caused by decreased bone formation, and decreased osteoblast proliferation and differentiation. The transcription factor Runx2 plays an important role in osteoblast differentiation and function by responding to microenvironment changes, including cytokines and mechanical factors. The effect of 1,25-dihydroxyvitamin D3 (VD3) on Runx2 in terms of mechanical competence is far less clear. This study describes how gravity affects the response of Runx2 to VD3. An MC3T3-6OSE2-Luc osteoblast model was constructed in which the activity of Runx2 was reflected by reporter luciferase activity and verified using bone-related cytokines. The results showed that luciferase activity in MC3T3-6OSE2-Luc cells transfected with Runx2 was twice that of the empty vector. Alkaline phosphatase (ALP) activity was increased in MC3T3-6OSE2-Luc cells by different concentrations of IGF-I and BMP2. MC3T3-6OSE2-Luc cells were cultured under simulated microgravity or centrifugation, with or without VD3. In simulated microgravity, luciferase activity was decreased after 48 h of clinorotation culture, but it was increased in the centrifuge culture. Luciferase activity was increased after VD3 treatment in normal conditions and in simulated microgravity; the increase in luciferase activity in simulated microgravity was lower than that in the 1 g condition when simultaneously treated with VD3, and higher than that in the centrifuge condition. Co-immunoprecipitation showed that the interaction between the VD3 receptor (VDR) and Runx2 was decreased by simulated microgravity, but increased by centrifugation. From these results, we conclude that gravity affects the response of Runx2 to VD3, which results from an alteration in the interaction between VDR and Runx2 under different gravity conditions.
Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharjee, P. S.; Sud, Yogesh C.; Liu, Xiaohong
2010-02-22
A common deficiency of many cloud-physics parameterizations, including NASA's Microphysics of Clouds with Aerosol-Cloud interactions (hereafter called McRAS-AC), is that they simulate a lower ice cloud particle number, and a larger particle size, than observed. A single column model (SCM) of the McRAS-AC and Global Circulation Model (GCM) physics, together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols, were used to systematically examine the influence of ammonium sulfate ((NH4)2SO4) aerosols, not included in the present formulations of McRAS-AC. Specifically, the influence of (NH4)2SO4 aerosols on the optical properties of both liquid and ice clouds was analyzed. First, an (NH4)2SO4 parameterization was included in the APM to assess its effect vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted with the SCM over the ARM-SGP site and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that inclusion of (NH4)2SO4 in the SCM made a remarkable improvement in the simulated effective radius of ice clouds. However, the corresponding ice-cloud optical thickness increased more than is observed. This can be caused by the lack of cloud advection and evaporation. We argue that this deficiency can be mitigated by adjusting the other tunable parameters of McRAS-AC, such as the precipitation efficiency. Inclusion of ice cloud particle splintering, introduced through well-established empirical equations, is found to further improve the results. Preliminary tests show that these changes make a substantial improvement in simulating the cloud optical properties in the GCM, particularly by simulating a far more realistic cloud distribution over the ITCZ.
Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid
2017-12-07
Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to Monte Carlo (MC) simulation data over the reduced temperature range of 0.6 ≤ T* ≤ 6.0 and the packing fraction range of 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy from the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of hard spheres are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from the MC simulation approaches agree well with the EOS results, which shows that the proposed model can reliably be utilized in the framework of thermodynamic theories.
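The fit-quality metric quoted above, the average absolute relative percent deviation (AARD), is computed as follows; the compressibility-factor values are placeholders, not the paper's data.

```python
import numpy as np

# AARD% between an EOS prediction and MC reference data:
# 100 * mean(|(model - reference) / reference|). Values are hypothetical.
def aard_percent(z_model, z_mc):
    return 100.0 * np.mean(np.abs((z_model - z_mc) / z_mc))

z_mc = np.array([1.05, 1.32, 1.88, 2.75])       # MC compressibility factors (hypothetical)
z_eos = np.array([1.049, 1.321, 1.883, 2.748])  # EOS predictions (hypothetical)
print(f"AARD = {aard_percent(z_eos, z_mc):.2f}%")
```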
Simulating Silicon Photomultiplier Response to Scintillation Light
Jha, Abhinav K.; van Dam, Herman T.; Kupinski, Matthew A.; Clarkson, Eric
2015-01-01
The response of a Silicon Photomultiplier (SiPM) to optical signals is affected by many factors including photon-detection efficiency, recovery time, gain, optical crosstalk, afterpulsing, dark count, and detector dead time. Many of these parameters vary with overvoltage and temperature. When used to detect scintillation light, there is a complicated non-linear relationship between the incident light and the response of the SiPM. In this paper, we propose a combined discrete-time discrete-event Monte Carlo (MC) model to simulate SiPM response to scintillation light pulses. Our MC model accounts for all relevant aspects of the SiPM response, some of which were not accounted for in the previous models. We also derive and validate analytic expressions for the single-photoelectron response of the SiPM and the voltage drop across the quenching resistance in the SiPM microcell. These analytic expressions consider the effect of all the circuit elements in the SiPM and accurately simulate the time-variation in overvoltage across the microcells of the SiPM. Consequently, our MC model is able to incorporate the variation of the different SiPM parameters with varying overvoltage. The MC model is compared with measurements on SiPM-based scintillation detectors and with some cases for which the response is known a priori. The model is also used to study the variation in SiPM behavior with SiPM-circuit parameter variations and to predict the response of a SiPM-based detector to various scintillators. PMID:26236040
Sneessens, I; Veysset, P; Benoit, M; Lamadon, A; Brunschwig, G
2016-11-01
Crop-livestock production is claimed to be more sustainable than specialized production systems. However, conflicting studies suggest that there must be conditions under which mixing crop and livestock production allows for higher sustainability performance. Whereas previous studies focused on the impact of crop-livestock interactions on performance, we posit here that crop-livestock organization is a key determinant of farming system sustainability. Crop-livestock organization refers to the percentage of the agricultural area that is dedicated to each production. Our objective is to investigate whether crop-livestock organization has both a direct and an indirect impact on mixed crop-livestock (MC-L) sustainability. To that end, we built a whole-farm model parameterized on representative French sheep and crop farming systems in plain areas (Vienne, France). This model permits simulating contrasted MC-L systems and their resulting sustainability through the following performance indicators: farm income, production, N balance, greenhouse gas (GHG) emissions (/kg product) and MJ consumption (/kg product). Two MC-L systems were simulated with contrasted crop-livestock organizations (MC20-L80: 20% of crops; MC80-L20: 80% of crops). A first scenario, allowing no crop-livestock interactions in either MC-L system, highlights that crop-livestock organization has a significant direct impact on performance that implies trade-offs between objectives of sustainability. Indeed, the MC80-L20 system shows higher performance for farm income (+44%), livestock production (+18%) and crop GHG emissions (-14%), whereas the MC20-L80 system has a better N balance (-53%) and a lower livestock MJ consumption (-9%). A second scenario, allowing for crop-livestock interactions in both the MC20-L80 and MC80-L20 systems, showed that crop-livestock organization also has a significant indirect impact on performance. Indeed, even if crop-livestock interactions permit improving performance, crop-livestock organization influences the capacity of MC-L systems to benefit from crop-livestock interactions. As a consequence, we observed a decreasing performance trade-off between MC-L systems for farm income (-4%) and crop GHG emissions (-10%), whereas the gap increases for nitrogen balance (+23%), livestock production (+6%), livestock MJ consumption (+16%) and GHG emissions (+5%), and crop MJ consumption (+5%). However, the indirect impact of crop-livestock organization does not reverse the trend of trade-offs between objectives of sustainability determined by the direct impact of crop-livestock organization. In conclusion, crop-livestock organization is a key factor that has to be taken into account when studying the sustainability of mixed crop-livestock systems.
Destruction of a Magnetized Star
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-01-01
What happens when a magnetized star is torn apart by the tidal forces of a supermassive black hole, in a violent process known as a tidal disruption event? Two scientists have broken new ground by simulating the disruption of stars with magnetic fields for the first time. [Figure: the magnetic field configuration during a simulation of the partial disruption of a star. Top left: pre-disruption star. Bottom left: matter begins to re-accrete onto the surviving core after the partial disruption. Right: vortices form in the core as high-angular-momentum debris continues to accrete, winding up and amplifying the field. Adapted from Guillochon & McCourt 2017.] What About Magnetic Fields? Magnetic fields are expected to exist in the majority of stars. Though these fields don't dominate the energy budget of a star (the magnetic pressure is a million times weaker than the gas pressure in the Sun's interior, for example), they are the drivers of interesting activity, like the prominences and flares of our Sun. Given this, we can wonder what role stars' magnetic fields might play when the stars are torn apart in tidal disruption events. Do the fields change what we observe? Are they dispersed during the disruption, or can they be amplified? Might they even be responsible for launching jets of matter from the black hole after the disruption? Star vs. Black Hole: In a recent study, James Guillochon (Harvard-Smithsonian Center for Astrophysics) and Michael McCourt (Hubble Fellow at UC Santa Barbara) have tackled these questions by performing the first simulations of tidal disruptions of stars that include magnetic fields. In their simulations, Guillochon and McCourt evolve a solar-mass star that passes close to a million-solar-mass black hole. Their simulations explore different magnetic field configurations for the star, and they consider both what happens when the star barely grazes the black hole and is only partially disrupted, and what happens when the black hole tears the star apart completely. Amplifying Encounters: For stars that survive their encounter with the black hole, Guillochon and McCourt find that the process of partial disruption and re-accretion can amplify the magnetic field of the star by up to a factor of 20. Repeated encounters of the star with the black hole could amplify the field even more. The authors suggest an interesting implication of this idea: a population of highly magnetized stars may have formed in our own galactic center, resulting from their encounters with the supermassive black hole Sgr A*. [Figure: a turbulent magnetic field forms after a partial stellar disruption and re-accretion of the tidal tails. Adapted from Guillochon & McCourt 2017.] Effects in Destruction: For stars that are completely shredded and form a tidal stream after their encounter with the black hole, the authors find that the magnetic field geometry straightens within the stream of debris.
There, the pressure of the magnetic field eventually dominates over the gas pressure and self-gravity. Guillochon and McCourt find that the field's new configuration isn't ideal for powering jets from the black hole, but it is strong enough to influence how the stream interacts with itself and its surrounding environment, likely affecting what we can expect to see from these short-lived events. These simulations have clearly demonstrated the need to further explore the role of magnetic fields in the disruptions of stars by black holes. Bonus: check out the full (brief) video from one of the simulations by Guillochon and McCourt (be sure to watch it in high-res!). It reveals the evolution of a star's magnetic field configuration as the star is partially disrupted by the forces of a supermassive black hole and then re-accretes. Citation: James Guillochon and Michael McCourt 2017 ApJL 834 L19. doi:10.3847/2041-8213/834/2/L19
NASA Astrophysics Data System (ADS)
Smith, C. G.; Cable, J. E.; Martin, J. B.; Roy, M.
2008-05-01
Pore water distributions of 222Rn (t1/2 = 3.83 d), obtained during two sampling trips 9-12 May 2005 and 6-8 May 2006, are used to determine spatial and temporal variations of fluid discharge from a seepage face located along the mainland shoreline of Indian River Lagoon, Florida. Porewater samples were collected from a 30 m transect of multi-level piezometers and analyzed for 222Rn via liquid scintillation counting; the mean of triplicate measurements was used to represent the porewater 222Rn activities. Sediment samples were collected from five vibracores (0, 10, 17.5, 20, and 30 m offshore) and emanation rates of 222Rn (sediment supported) were determined using a standard cryogenic extraction technique. A conceptual 222Rn transport model and subsequent numerical model were developed based on the vertical distribution of dissolved and sediment-supported 222Rn and the applicable processes occurring along the seepage face (e.g. advection, diffusion, and nonlocal exchange). The model was solved inversely with the addition of two Monte Carlo (MC) simulations to increase the statistical reliability of three parameters: fresh groundwater seepage velocity (v), irrigation intensity (α0), and irrigation attenuation (α1). The first MC simulation ensures that the Nelder-Mead minimization algorithm converges on a global minimum of the merit function and that the parameter estimates are consistent within this global minimum. The second MC simulation provides 90% confidence intervals on the parameter estimates using the measured 222Rn activity variance. Fresh groundwater seepage velocities obtained from the model decrease linearly with distance from the shoreline; seepage velocities range between 0.6 and 42.2 cm d-1. Based on this linear relationship, the terminus of the fresh groundwater seepage is approximately 25 m offshore, and total fresh groundwater discharge for the May-2005 and May-2006 sampling trips is 1.16 and 1.45 m3 d-1 m-1 of shoreline, respectively. We hypothesize that the 25% increase in specific discharge between May-2005 and May-2006 reflects higher recharge via precipitation to the surficial aquifer during the highly active 2005 Atlantic hurricane season. Irrigation rates generally decrease offshore for both sampling periods; irrigation rates range between 4.9 and 85.7 cm d-1. Physical and biological mechanisms responsible for the observed irrigation likely include density-driven convection, wave pumping, and bio-irrigation. The inclusion of both advective and nonlocal exchange processes in the model permits the separation of submarine groundwater discharge into fresh submarine groundwater discharge (seepage velocities) and (re)circulated lagoon water (as irrigation).
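A hedged sketch of the two-stage Monte Carlo inversion strategy described above: random restarts of a Nelder-Mead fit to locate the global minimum of the merit function, followed by a parametric bootstrap for 90% confidence intervals. The exponential forward model below is a made-up stand-in for the study's 222Rn transport equation; scipy's optimizer is assumed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def merit(params, z, obs, sigma):
    """Chi-square misfit between a hypothetical forward model and the data."""
    v, a0, a1 = params
    model = a0 * np.exp(-a1 * z) + v * z      # stand-in, not the 222Rn model
    return np.sum(((model - obs) / sigma) ** 2)

z = np.linspace(0, 2, 20)                     # depth, m (toy values)
true = (0.1, 5.0, 1.5)
obs = true[1] * np.exp(-true[2] * z) + true[0] * z
sigma = 0.2
obs_noisy = obs + rng.normal(0, sigma, z.size)

# MC stage 1: random restarts so Nelder-Mead converges on a global minimum
starts = rng.uniform([0, 0, 0], [1, 10, 5], size=(50, 3))
fits = [minimize(merit, s, args=(z, obs_noisy, sigma), method="Nelder-Mead")
        for s in starts]
best = min(fits, key=lambda f: f.fun)

# MC stage 2: parametric bootstrap using the measured variance -> 90% CIs
boot = np.array([minimize(merit, best.x,
                          args=(z, obs + rng.normal(0, sigma, z.size), sigma),
                          method="Nelder-Mead").x for _ in range(200)])
lo, hi = np.percentile(boot, [5, 95], axis=0)
print("best fit:", best.x)
print("90% CIs:", list(zip(lo, hi)))
```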
Fiorina, E; Ferrero, V; Pennazio, F; Baroni, G; Battistoni, G; Belcari, N; Cerello, P; Camarlinghi, N; Ciocca, M; Del Guerra, A; Donetti, M; Ferrari, A; Giordanengo, S; Giraudo, G; Mairani, A; Morrocchi, M; Peroni, C; Rivetti, A; Da Rocha Rolo, M D; Rossi, S; Rosso, V; Sala, P; Sportelli, G; Tampellini, S; Valvo, F; Wheadon, R; Bisogni, M G
2018-05-07
Hadrontherapy is a method for treating cancer with very targeted dose distributions and enhanced radiobiological effects. To fully exploit these advantages, in vivo range monitoring systems are required. These devices measure, preferably during the treatment, the secondary radiation generated by the beam-tissue interactions. However, since correlation of the secondary radiation distribution with the dose is not straightforward, Monte Carlo (MC) simulations are very important for treatment quality assessment. The INSIDE project constructed an in-beam PET scanner to detect signals generated by the positron-emitting isotopes resulting from projectile-target fragmentation. In addition, a FLUKA-based simulation tool was developed to predict the corresponding reference PET images using a detailed scanner model. The INSIDE in-beam PET was used to monitor two consecutive proton treatment sessions on a patient at the Italian Center for Oncological Hadrontherapy (CNAO). The reconstructed PET images were updated every 10 s providing a near real-time quality assessment. By half-way through the treatment, the statistics of the measured PET images were already significant enough to be compared with the simulations with average differences in the activity range less than 2.5 mm along the beam direction. Without taking into account any preferential direction, differences within 1 mm were found. In this paper, the INSIDE MC simulation tool is described and the results of the first in vivo agreement evaluation are reported. These results have justified a clinical trial, in which the MC simulation tool will be used on a daily basis to study the compliance tolerances between the measured and simulated PET images. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Ericson, Mark D; Freeman, Katie T; Schnell, Sathya M; Haskell-Luevano, Carrie
2017-01-26
The melanocortin system consists of five receptor subtypes, endogenous agonists, and naturally occurring antagonists. These receptors and ligands have been implicated in numerous biological pathways including processes linked to obesity and food intake. Herein, a truncation structure-activity relationship study of chimeric agouti-related protein (AGRP)/[Nle4,DPhe7]α-melanocyte stimulating hormone (NDP-MSH) ligands is reported. The tetrapeptide His-DPhe-Arg-Trp or tripeptide DPhe-Arg-Trp replaced the Arg-Phe-Phe sequence in the AGRP active loop derivative c[Pro-Arg-Phe-Phe-Xxx-Ala-Phe-DPro], where Xxx was the native Asn of AGRP or a diaminopropionic (Dap) acid residue previously shown to increase antagonist potency at the mMC4R. The Phe, Ala, and Dap/Asn residues were successively removed to generate a 14-member library that was assayed for agonist activity at the mouse MC1R, MC3R, MC4R, and MC5R. Two compounds possessed nanomolar agonist potency at the mMC4R, c[Pro-His-DPhe-Arg-Trp-Asn-Ala-Phe-DPro] and c[Pro-His-DPhe-Arg-Trp-Dap-Ala-DPro], and may be further developed to generate novel melanocortin probes and ligands for understanding and treating obesity.
Monte Carlo calculation of proton stopping power and ranges in water for therapeutic energies
NASA Astrophysics Data System (ADS)
Bozkurt, Ahmet
2017-09-01
Monte Carlo is a statistical technique for obtaining numerical solutions to physical or mathematical problems that are analytically impractical, if not impossible, to solve. For charged particle transport problems, it presents many advantages over deterministic methods, since such problems require a realistic description of the problem geometry as well as detailed tracking of every source particle. Thus, MC can be considered a powerful alternative to the well-known Bethe-Bloch equation, in which a formula with various corrections is used to obtain the stopping powers and ranges of electrons, positrons, protons, alphas, etc. This study presents how a stochastic method such as MC can be utilized to obtain certain quantities of practical importance related to charged particle transport. Sample simulation geometries were formed for a water medium in which disk-shaped thin detectors were employed to compute average values of absorbed dose and flux at specific distances. For each detector cell, these quantities were used to evaluate the range and the stopping power, as well as the shape of the Bragg curve, for mono-energetic point-source pencil beams of protons. The results were found to agree within ±2% with the data from the NIST compilation. It is safe to conclude that this approach can be extended to determine dosimetric quantities for other media, energies and charged particle types.
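To illustrate how stopping power and range can be obtained outside a full MC code, here is a hedged numerical sketch that integrates the uncorrected Bethe stopping-power formula for protons in water to get the CSDA range. The constants are standard (Z/A ≈ 0.555, I ≈ 75 eV for water), but shell, density-effect and nuclear corrections are omitted, so the values only approximate the NIST data mentioned above.

```python
import numpy as np

MP = 938.272      # proton rest energy, MeV
ME = 0.511        # electron rest energy, MeV
K = 0.307075      # Bethe formula coefficient, MeV cm^2/g (for Z/A in mol/g)
Z_A = 0.5551      # Z/A of water
I = 75e-6         # mean excitation energy of water, MeV
RHO = 1.0         # density of water, g/cm^3

def stopping_power(T):
    """Mass stopping power of water for protons (bare Bethe), MeV cm^2/g."""
    gamma = 1.0 + T / MP
    beta2 = 1.0 - 1.0 / gamma**2
    return K * Z_A / beta2 * (np.log(2 * ME * beta2 * gamma**2 / I) - beta2)

def csda_range(T, T_min=1.0, n=2000):
    """CSDA range in cm: integrate dT / (rho * S(T)) from T_min up to T."""
    e = np.linspace(T_min, T, n)
    return np.trapz(1.0 / (RHO * stopping_power(e)), e)

for T in (50.0, 100.0, 150.0, 250.0):
    print(f"{T:5.0f} MeV  ->  range ~ {csda_range(T):6.2f} cm")
```

At 100 MeV this yields roughly 7.7 cm, close to the NIST CSDA value; the small residual below T_min is neglected.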
Astronauts Grissom and Young in Gemini Mission Simulator
1964-05-22
S64-25295 (March 1964) --- Astronauts Virgil I. (Gus) Grissom (right) and John W. Young, prime crew for the first manned Gemini mission (GT-3), are shown inside a Gemini mission simulator at McDonnell Aircraft Corp., St. Louis, MO. The simulator will provide Gemini astronauts and ground crews with realistic mission simulation during intensive training prior to actual launch.
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation was performed of flows in monatomic gases, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculations is the mean profiles of density and temperature.
Simulation of temperature distribution in tumor Photothermal treatment
NASA Astrophysics Data System (ADS)
Zhang, Xiyang; Qiu, Shaoping; Wu, Shulian; Li, Zhifang; Li, Hui
2018-02-01
Light transmission in biological tissue and the optical properties of biological tissue are important research topics in biomedical photonics, of great theoretical and practical significance for medical diagnosis and light therapy of disease. In this paper, a temperature feedback controller is presented for monitoring photothermal treatment in real time. Two-dimensional Monte Carlo (MC) simulation and the diffusion approximation were compared and analyzed. The results demonstrated that the diffusion approximation with extrapolated boundary conditions, solved by the finite element method, is a good approximation to the MC simulation. Then, in order to minimize thermal damage, real-time temperature monitoring was implemented with a proportional-integral-derivative (PID) controller during photothermal treatment.
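Since the abstract specifies the control loop only as "PID", here is a generic discrete PID sketch driving a toy first-order tissue-heating model toward a hyperthermia setpoint. The gains, the 43 °C target and the lumped thermal model are illustrative assumptions, not the paper's values.

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, measured):
        err = self.setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy lumped tissue model: dT/dt = power*g - (T - T_amb)/tau
T, T_amb, tau, g, dt = 25.0, 25.0, 5.0, 2.0, 0.1
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=43.0, dt=dt)  # illustrative gains
for _ in range(600):
    power = max(0.0, pid.step(T))      # laser power cannot go negative
    T += dt * (power * g - (T - T_amb) / tau)
print(f"temperature after {600*dt:.0f} s: {T:.2f} °C")
```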
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS will be used for the MC simulation; therefore, patient-specific CT-DICOM files were converted to the PHITS input format. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file is used for MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.
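The CT-to-MC conversion step usually maps Hounsfield units to mass density (and then to material composition) via a piecewise-linear calibration. A hedged sketch of that idea follows; the breakpoints are typical of Schneider-style curves and are not the values used in this study or required by PHITS.

```python
import numpy as np

# illustrative piecewise-linear HU -> mass density (g/cm^3) calibration;
# breakpoints below are typical textbook values, not this study's curve
HU_PTS  = np.array([-1000.0, -100.0, 0.0, 100.0, 1600.0])
RHO_PTS = np.array([  0.001,   0.93, 1.0,  1.09,  1.96 ])

def hu_to_density(hu):
    """Linearly interpolate density between calibration breakpoints."""
    return np.interp(hu, HU_PTS, RHO_PTS)

ct_slice = np.array([[-1000, -50], [40, 900]])   # toy 2x2 "CT slice" in HU
print(hu_to_density(ct_slice))
```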
Transient in-plane thermal transport in nanofilms with internal heating
Cao, Bing-Yang
2016-01-01
Wide applications of nanofilms in electronics necessitate an in-depth understanding of nanoscale thermal transport, which significantly deviates from Fourier's law. Great efforts have focused on the effective thermal conductivity under temperature difference, while it is still ambiguous whether the diffusion equation with an effective thermal conductivity can accurately characterize the nanoscale thermal transport with internal heating. In this work, transient in-plane thermal transport in nanofilms with internal heating is studied via Monte Carlo (MC) simulations in comparison to the heat diffusion model and mechanism analyses using Fourier transform. Phonon-boundary scattering leads to larger temperature rise and slower thermal response rate when compared with the heat diffusion model based on Fourier's law. The MC simulations are also compared with the diffusion model with effective thermal conductivity. In the first case of continuous internal heating, the diffusion model with effective thermal conductivity under-predicts the temperature rise by the MC simulations at the initial heating stage, while the deviation between them gradually decreases and vanishes with time. By contrast, for the one-pulse internal heating case, the diffusion model with effective thermal conductivity under-predicts both the peak temperature rise and the cooling rate, so the deviation can always exist. PMID:27118903
Self-Consistent Monte Carlo Study of the Coulomb Interaction under Nano-Scale Device Structures
NASA Astrophysics Data System (ADS)
Sano, Nobuyuki
2011-03-01
It has been pointed out that the Coulomb interaction between electrons is expected to be of crucial importance for predicting reliable device characteristics. In particular, device performance is greatly degraded by plasmon excitation, represented by dynamical potential fluctuations induced in the highly doped source and drain regions by the channel electrons. We employ self-consistent 3D Monte Carlo (MC) simulations, which reproduce both the correct mobility under various electron concentrations and the collective plasma waves, to study the physical impact of dynamical potential fluctuations on device performance in double-gate MOSFETs. The average force experienced by an electron due to the Coulomb interaction inside the device is evaluated by performing the self-consistent MC simulations and fixed-potential MC simulations without the Coulomb interaction. Also, the band-tailing associated with the local potential fluctuations in the highly doped source region is quantitatively evaluated, and it is found that the band-tailing becomes strongly dependent on position in real space even inside the uniform source region. This work was partially supported by Grants-in-Aid for Scientific Research B (No. 2160160) from the Ministry of Education, Culture, Sports, Science and Technology in Japan.
Transient in-plane thermal transport in nanofilms with internal heating.
Hua, Yu-Chao; Cao, Bing-Yang
2016-02-01
Wide applications of nanofilms in electronics necessitate an in-depth understanding of nanoscale thermal transport, which significantly deviates from Fourier's law. Great efforts have focused on the effective thermal conductivity under temperature difference, while it is still ambiguous whether the diffusion equation with an effective thermal conductivity can accurately characterize the nanoscale thermal transport with internal heating. In this work, transient in-plane thermal transport in nanofilms with internal heating is studied via Monte Carlo (MC) simulations in comparison to the heat diffusion model and mechanism analyses using Fourier transform. Phonon-boundary scattering leads to larger temperature rise and slower thermal response rate when compared with the heat diffusion model based on Fourier's law. The MC simulations are also compared with the diffusion model with effective thermal conductivity. In the first case of continuous internal heating, the diffusion model with effective thermal conductivity under-predicts the temperature rise by the MC simulations at the initial heating stage, while the deviation between them gradually decreases and vanishes with time. By contrast, for the one-pulse internal heating case, the diffusion model with effective thermal conductivity under-predicts both the peak temperature rise and the cooling rate, so the deviation can always exist.
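For readers who want to reproduce the diffusion-model side of such a comparison, here is a hedged sketch of an explicit finite-difference solution of the 1D heat equation with uniform internal heating and an effective (boundary-scattering-reduced) thermal conductivity. Geometry and property values are illustrative, and the phonon MC itself is beyond a few lines.

```python
import numpy as np

# explicit FTCS solver for dT/dt = alpha * d2T/dx2 + q/(rho*c), interior heating,
# isothermal boundaries held at zero temperature rise
L, N = 100e-9, 101                 # film thickness (m) and grid points (assumed)
k_eff = 50.0                       # effective conductivity, W/(m K), < bulk (assumed)
rho_c = 1.6e6                      # volumetric heat capacity, J/(m^3 K) (assumed)
alpha = k_eff / rho_c
dx = L / (N - 1)
dt = 0.4 * dx**2 / alpha           # stability requires dt <= dx^2 / (2 alpha)
q = 1e17                           # volumetric heating, W/m^3 (assumed)

T = np.zeros(N)                    # temperature rise; boundaries stay at 0
for _ in range(2000):
    T[1:-1] += dt * (alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2 + q / rho_c)
print("peak temperature rise:", T.max(), "K")
```

Replacing k_eff by the bulk value reproduces the classical Fourier prediction, which is the baseline the MC results above are compared against.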
Fermi gases with imaginary mass imbalance and the sign problem in Monte-Carlo calculations
NASA Astrophysics Data System (ADS)
Roscher, Dietrich; Braun, Jens; Chen, Jiunn-Wei; Drut, Joaquín E.
2014-05-01
Fermi gases in strongly coupled regimes are inherently challenging for many-body methods. Although progress has been made analytically, quantitative results require ab initio numerical approaches, such as Monte-Carlo (MC) calculations. However, mass-imbalanced and spin-imbalanced gases are not accessible to MC calculations due to the infamous sign problem. For finite spin imbalance, the problem can be circumvented using imaginary polarizations and analytic continuation, and large parts of the phase diagram then become accessible. We propose to apply this strategy to the mass-imbalanced case, which opens up the possibility to study the associated phase diagram with MC calculations. We perform a first mean-field analysis which suggests that zero-temperature studies, as well as detecting a potential (tri)critical point, are feasible.
Astronaut William S. McArthur in training for contingency EVA in WETF
1993-09-10
S93-43840 (6 Sept 1993) --- Astronaut William S. McArthur, mission specialist, participates in training for contingency Extravehicular Activity (EVA) for the STS-58 mission. For simulation purposes, McArthur was about to be submerged to a point of neutral buoyancy in the Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F). Though the Spacelab Life Sciences (SLS-2) mission does not include a planned EVA, all crews designate members to learn proper procedures to perform outside the spacecraft in the event of failure of remote means to accomplish those tasks.
NASA Astrophysics Data System (ADS)
Batailly, Alain; Magnain, Benoît; Chevaugeon, Nicolas
2013-05-01
The numerical simulation of contact problems is still a delicate matter, especially when large transformations are involved. In that case, relatively large sliding can occur between contact surfaces, and the discretization error induced by usual finite elements may not be satisfactory. In particular, usual elements lead to a facetization of the contact surface, meaning an unavoidable discontinuity of the normal vector to this surface. Uncertainty over the precision of the results, irregularity of the displacement of the contact nodes, and even numerical oscillations of the contact reaction force may result from such a discontinuity. Among the existing methods for tackling such an issue, one may consider mortar elements (Fischer and Wriggers, Comput Methods Appl Mech Eng 195:5020-5036, 2006; McDevitt and Laursen, Int J Numer Methods Eng 48:1525-1547, 2000; Puso and Laursen, Comput Methods Appl Mech Eng 93:601-629, 2004), smoothing of the contact surfaces with an additional geometrical entity (B-splines or NURBS) (Belytschko et al., Int J Numer Methods Eng 55:101-125, 2002; Kikuchi, Penalty/finite element approximations of a class of unilateral contact problems. Penalty method and finite element method, ASME, New York, 1982; Legrand, Modèles de prédiction de l'interaction rotor/stator dans un moteur d'avion. PhD thesis, École Centrale de Nantes, Nantes, 2005; Muñoz, Comput Methods Appl Mech Eng 197:979-993, 2008; Wriggers and Krstulovic-Opara, J Appl Math Mech (ZAMM) 80:77-80, 2000), and the use of isogeometric analysis (Temizer et al., Comput Methods Appl Mech Eng 200:1100-1112, 2011; Hughes et al., Comput Methods Appl Mech Eng 194:4135-4195, 2005; de Lorenzis et al., Int J Numer Meth Eng, in press, 2011). In the present paper, we focus on these last two methods, which are combined with a finite element code using the bi-potential method for contact management (Feng et al., Comput Mech 36:375-383, 2005). A comparative study focusing on the pros and cons of each method regarding geometrical precision and numerical stability of the contact solution is proposed. The scope of this study is limited to 2D contact problems, for which we consider several types of finite elements. Test cases are given in order to illustrate this comparative study.
Business Simulations in Financial Management Courses: Implications for Higher Education
ERIC Educational Resources Information Center
Wolmarans, H. P.
2006-01-01
Business simulations provide a teaching method that typically yields (1) more hands-on experience, (2) a higher level of excitement, (3) a higher noise level (and yet a lower incidence of problems), and (4) more commitment than traditional methods of teaching (McLure 1997, 3). Business simulations are experiential learning opportunities that have…
SU-F-T-610: Comparison of Output Factors for Small Radiation Fields Used in SBRT Treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R; Eldib, A; Li, J
2016-06-15
Purpose: In order to fundamentally understand our previous dose verification results between measurements and calculations from the treatment planning system (TPS) for SBRT plans for different sized targets, the goal of the present work was to compare output factors for small fields measured using EDR2 films with the TPS and Monte Carlo (MC) simulations. Methods: A 6 MV beam was delivered to EDR2 films for each of the following field sizes: 1×1 cm², 1.5×1.5 cm², 2×2 cm², 3×3 cm², 4×4 cm², 5×5 cm² and 10×10 cm². The films were developed in a film processor, then scanned with a Vidar VXR-16 scanner and analyzed using RIT113 version 6.1. A standard calibration curve was obtained with the 6 MV beam and was used to obtain absolute dose for the measured field sizes. Similar plans for all field sizes mentioned above were generated using Eclipse with the Analytical Anisotropic Algorithm. Similarly, MC simulations were carried out for the different field sizes using MCSIM, an in-house MC code. Output factors normalized to the 10×10 cm² reference field were calculated for the different field sizes in all three cases and compared. Results: For field sizes ranging from 1×1 cm² to 2×2 cm², the differences in output factors between measurements (films), TPS and MC simulations were within 0.22%. For field sizes ranging from 3×3 cm² to 5×5 cm², the differences in output factors were within 0.10%. Conclusion: No clinically significant difference was obtained in output factors for the different field sizes acquired from films, TPS and MC simulations. Our results showed that the output factors are predicted accurately by the TPS when compared with the actual measurements and the superior Monte Carlo dose calculation method. This study will help us understand our previously obtained dose verification results for small fields used in SBRT treatment.
NASA Astrophysics Data System (ADS)
Aziz Hashikin, Nurul Ab; Yeong, Chai-Hong; Guatelli, Susanna; Jeet Abdullah, Basri Johan; Ng, Kwan-Hoong; Malaroda, Alessandra; Rosenfeld, Anatoly; Perkins, Alan Christopher
2017-09-01
We aimed to investigate the validity of the partition model (PM) in estimating the absorbed doses to liver tumour (D_T), normal liver tissue (D_NL) and lungs (D_L) when cross-fire irradiation between these compartments is considered. A MIRD-5 phantom incorporating various treatment parameters, i.e. tumour involvement (TI), tumour-to-normal liver uptake ratio (T/N) and lung shunting (LS), was simulated using the Geant4 Monte Carlo (MC) toolkit. 10^8 track histories were generated for each combination of the three parameters to obtain the absorbed dose per activity uptake in each compartment (D_T/A_T, D_NL/A_NL and D_L/A_L). The administered activities, A, were estimated using the PM so as to achieve either the limiting dose to normal liver, D_NL,lim, or to lungs, D_L,lim (70 or 30 Gy, respectively). Using these administered activities, the activity uptake in each compartment (A_T, A_NL and A_L) was estimated and multiplied by the absorbed dose per activity uptake obtained from the MC simulations to give the actual dose received by each compartment. The PM overestimated D_L by 11.7% in all cases, owing to particles escaping from the lungs. D_T and D_NL by MC were strongly affected by T/N, which the PM does not consider because cross-fire at the tumour-normal liver boundary is excluded. This resulted in the PM overestimating D_T by up to 8% and underestimating D_NL by as much as -78%. When D_NL,lim was estimated via the PM, the MC simulations showed significantly higher D_NL for cases with higher T/N and LS ≤ 10%. All D_L and D_T by MC were overestimated by the PM, so D_L,lim was never exceeded. The PM leads to inaccurate dose estimations owing to the exclusion of cross-fire irradiation, i.e. between the tumour and normal liver tissue. Caution should be taken for cases with higher TI and T/N and lower LS, as they contribute to major underestimation of D_NL. For D_L, a different correction factor for dose calculation may be used for improved accuracy.
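The partition model itself is a closed-form calculation. Below is a hedged sketch written for the 90Y microsphere setting this abstract appears to work in (an assumption on our part); the 49.67 Gy kg/GBq factor is the standard 90Y value for locally absorbed dose, and the limitation the study probes, that all energy is assumed absorbed within the source compartment (no cross-fire), is visible in the code. All example inputs are made up.

```python
def partition_model(A_GBq, m_T, m_NL, m_L, TN, LS):
    """Return (D_T, D_NL, D_L) in Gy for administered activity A_GBq.
    Masses in kg; TN is the tumour-to-normal-liver uptake ratio; LS the
    lung shunt fraction. Classic 90Y partition model (assumed context)."""
    A_L = LS * A_GBq                               # activity shunted to lungs
    A_rest = A_GBq - A_L
    A_T = A_rest * TN * m_T / (TN * m_T + m_NL)    # tumour share via T/N
    A_NL = A_rest - A_T
    # 49.67 Gy kg/GBq: assumes every decay's energy is absorbed locally,
    # i.e. no cross-fire between compartments (the PM's key simplification)
    dose = lambda A, m: 49.67 * A / m
    return dose(A_T, m_T), dose(A_NL, m_NL), dose(A_L, m_L)

D_T, D_NL, D_L = partition_model(A_GBq=2.0, m_T=0.3, m_NL=1.5, m_L=1.0,
                                 TN=3.0, LS=0.1)
print(f"D_T={D_T:.1f} Gy, D_NL={D_NL:.1f} Gy, D_L={D_L:.1f} Gy")
```

Comparing such closed-form numbers with full MC doses is exactly where the cross-fire error quantified in the abstract shows up.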
NASA Astrophysics Data System (ADS)
Preston, L. A.
2017-12-01
Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
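As a toy illustration of the MC approach the abstract contrasts with SPDEs (many deterministic runs aggregated into statistics), here is a hedged sketch using a trivially cheap "solver", a layered travel-time integral through an uncertain sound-speed profile. For a 3-D full-waveform solver like Paracousti, each of these runs would itself be expensive, which is precisely the motivation for the SPDE alternative.

```python
import numpy as np

rng = np.random.default_rng(1)

def travel_time(c_profile, dz):
    """Deterministic 'solver': one-way vertical travel time through layers."""
    return np.sum(dz / c_profile)

# uncertain environment: layer sound speeds ~ N(1500, 10) m/s (illustrative)
dz = 10.0                        # layer thickness, m
n_layers, n_runs = 100, 20000
times = np.empty(n_runs)
for i in range(n_runs):          # MC: many deterministic runs, then aggregate
    c = rng.normal(1500.0, 10.0, n_layers)
    times[i] = travel_time(c, dz)

print(f"mean = {times.mean():.4f} s, std = {times.std():.2e} s")
```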
NASA Astrophysics Data System (ADS)
Tarasov, A. P.; Egorov, A. I.; Rogatkin, D. A.
2017-07-01
Using multidetector computed tomography, the thicknesses of the bone squama and soft tissues of the human head were assessed. MC simulation revealed that the source-detector separation distances of three oximeters are inappropriate, which can cause extracerebral contamination of the measured signal.
Mixing of Isotactic and Syndiotactic Polypropylenes in the Melt
DOE Office of Scientific and Technical Information (OSTI.GOV)
CLANCY,THOMAS C.; PUTZ,MATHIAS; WEINHOLD,JEFFREY D.
2000-07-14
The miscibility of polypropylene (PP) melts in which the chains differ only in stereochemical composition has been investigated by two different procedures. One approach uses detailed local information from a Monte Carlo simulation of a single chain, and the other takes this information from a rotational isomeric state model devised decades ago for another purpose. The first approach uses PRISM theory to deduce the intermolecular packing in the polymer blend, while the second uses a Monte Carlo simulation of a coarse-grained representation of independent chains, expressed on a high-coordination lattice. Both approaches find a positive energy change upon mixing isotactic PP (iPP) and syndiotactic polypropylene (sPP) chains in the melt. This conclusion is qualitatively consistent with observations published recently by Muelhaupt and coworkers. The size of the energy change on mixing is smaller in the MC/PRISM approach than in the RIS/MC simulation, with the smaller energy change being in better agreement with experiment. The RIS/MC simulation finds no demixing for iPP and atactic polypropylene (aPP) in the melt, consistent with several experimental observations in the literature. The demixing of the iPP/sPP blend may arise from attractive interactions in the sPP melt that are disrupted when the sPP chains are diluted with aPP or iPP chains.
Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal
NASA Astrophysics Data System (ADS)
Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.
2014-05-01
Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering and other disciplines where materials are exposed to different forms of irradiation. Rutherford Backscattering/Channeling (RBS/C) and Monte Carlo (MC) simulations are the most convenient tools for this purpose, as they allow one to determine several features of lattice defects: their type, concentration and damage accumulation kinetics. On the other hand, various irradiation conditions can be efficiently modeled by the ion irradiation method without making the sample radioactive. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method for studying radiation damage in materials. This paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of the geometrical parameters of crystal lattice deformation in the vicinity of dislocations. Once these parameters and their variations within a distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth distribution profiles. The final outcome of the deconvolution procedure is the cross-section values calculated for the two types of defects observed (RDA and dislocations).
Design of a digital phantom population for myocardial perfusion SPECT imaging research.
Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric
2014-06-21
Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
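A hedged sketch of the post-simulation organ-summing idea described above: per-organ projections simulated once are scaled by arbitrary uptake values, summed, and Poisson noise is added at the desired count level. The toy arrays below stand in for the SimSET outputs; nothing here reproduces the paper's phantoms or its effective-count criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

# pre-simulated noise-poor projections, one per organ (toy 2D arrays here)
organs = {name: rng.random((64, 64)) for name in
          ("heart", "liver", "lungs", "body")}   # stand-ins for MC outputs

def make_study(uptakes, total_counts):
    """Scale per-organ projections by uptake, sum, then add Poisson noise."""
    proj = sum(uptakes[n] * organs[n] for n in organs)
    proj *= total_counts / proj.sum()            # normalize to desired counts
    return rng.poisson(proj)

study = make_study({"heart": 5.0, "liver": 3.0, "lungs": 0.5, "body": 1.0},
                   total_counts=5e5)
print(study.shape, study.sum())
```

Because the organ projections are computed only once, any number of uptake combinations can be generated at essentially no extra simulation cost, which is the efficiency argument made above.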
Design of a digital phantom population for myocardial perfusion SPECT imaging research
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Du, Yong; Fung, George S. K.; Tsui, Benjamin M. W.; Links, Jonathan M.; Frey, Eric
2014-06-01
Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altsybeev, Igor
2016-01-22
In the present work, a Monte-Carlo toy model with repulsing quark-gluon strings in hadron-hadron collisions is described. String repulsion creates transverse boosts of the string decay products, modifying observables. As an example, long-range correlations between the mean transverse momenta of particles in two observation windows are studied in a MC toy simulation of heavy-ion collisions.
Vertical Temperature Simulation of Pegasus Runway, McMurdo Station, Antarctica
2015-01-01
Report approved for public release; distribution is unlimited. Prepared for the National Science Foundation, Division of Polar Programs (ERDC/CRREL Report TR-15-2).
SU-E-T-535: Proton Dose Calculations in Homogeneous Media.
Chapman, J; Fontenot, J; Newhauser, W; Hogstrom, K
2012-06-01
To develop a pencil beam dose calculation algorithm for scanned proton beams that improves modeling of scatter events. Our pencil beam algorithm (PBA) was developed for calculating dose from monoenergetic, parallel proton beams in homogeneous media. Fermi-Eyges theory was implemented for pencil beam transport. Elastic and nonelastic scatter effects were each modeled as a Gaussian distribution, with root mean square (RMS) widths determined from theoretical calculations and a nonlinear fit to a Monte Carlo (MC) simulated 1mm × 1mm proton beam, respectively. The PBA was commissioned using MC simulations in a flat water phantom. Resulting PBA calculations were compared with results of other models reported in the literature on the basis of differences between PBA and MC calculations of 80-20% penumbral widths. Our model was further tested by comparing PBA and MC results for oblique beams (45 degree incidence) and surface irregularities (step heights of 1 and 4 cm) for energies of 50-250 MeV and field sizes of 4cm × 4cm and 10cm × 10cm. Agreement between PBA and MC distributions was quantified by computing the percentage of points within 2% dose difference or 1mm distance to agreement. Our PBA improved agreement between calculated and simulated penumbral widths by an order of magnitude compared with previously reported values. For comparisons of oblique beams and surface irregularities, agreement between PBA and MC distributions was better than 99%. Our algorithm showed improved accuracy over other models reported in the literature in predicting the overall shape of the lateral profile through the Bragg peak. This improvement was achieved by incorporating nonelastic scatter events into our PBA. The increased modeling accuracy of our PBA, incorporated into a treatment planning system, may improve the reliability of treatment planning calculations for patient treatments. This research was supported by contract W81XWH-10-1-0005 awarded by The U.S. Army Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD 21702-5014. This report does not necessarily reflect the position or policy of the Government, and no official endorsement should be inferred. © 2012 American Association of Physicists in Medicine.
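A hedged sketch of the dual-Gaussian lateral model described above: the lateral dose profile is a weighted sum of a narrow (elastic) and a broad (nonelastic) Gaussian, from which an 80-20% fall-off width can be read. The sigma values and nonelastic weight below are illustrative, not the fitted RMS widths from the abstract.

```python
import numpy as np

def lateral_profile(x, sigma_el, sigma_nel, w_nel):
    """Lateral dose: weighted sum of an elastic (narrow) and a nonelastic
    (broad) Gaussian, each normalized to unit area."""
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    return (1 - w_nel) * g(sigma_el) + w_nel * g(sigma_nel)

x = np.linspace(-30, 30, 601)                        # lateral position, mm
d = lateral_profile(x, sigma_el=3.0, sigma_nel=12.0, w_nel=0.1)

# crude 80%-20% fall-off width relative to the central-axis value
rel = d / d.max()
x80 = x[x > 0][np.argmin(np.abs(rel[x > 0] - 0.8))]
x20 = x[x > 0][np.argmin(np.abs(rel[x > 0] - 0.2))]
print(f"80-20% fall-off width ~ {x20 - x80:.1f} mm")
```

Dropping the broad component (w_nel = 0) shows immediately how neglecting nonelastic scatter narrows the predicted penumbra, the discrepancy the PBA above was built to remove.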
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randeniya, S; Mirkovic, D; Titt, U
2014-06-01
Purpose: In intensity modulated proton therapy (IMPT), energy-dependent protons per monitor unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A "verification plan" (i.e., the treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy-dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs equal 1 MU (the vendor-recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in protons/MU calibration factors between methods one and two translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions qualitatively preserved the shape of the 3D dose distribution, they were quantitatively different. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: Results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations. Research supported by National Cancer Institute grant P01CA021239.
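The conversion described above is, in outline, a per-energy-layer multiplication: MC dose per source proton times the energy-dependent protons/MU factor times the delivered MU. A hedged sketch with made-up numbers follows; real factors are energy dependent and, as the authors recommend, experimentally determined.

```python
def absolute_dose_gy(dose_per_proton, protons_per_mu, mu):
    """Sum energy layers: (Gy/proton) x (protons/MU at that energy) x MU."""
    return sum(dose_per_proton[E] * protons_per_mu[E] * mu[E] for E in mu)

# two illustrative energy layers (keys in MeV); none are the study's values
dose_per_proton = {120: 1.8e-11, 160: 2.2e-11}   # Gy per source proton at a voxel
protons_per_mu  = {120: 9.0e7,   160: 1.1e8}     # calibration factors
mu              = {120: 60.0,    160: 40.0}      # delivered monitor units

print(f"{absolute_dose_gy(dose_per_proton, protons_per_mu, mu):.3f} Gy")
```

Since each layer's dose scales linearly with its factor, an error in protons/MU propagates in proportion to that layer's MU share, consistent with the beam-dependent sensitivity reported above.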
Geochemical Investigation of Slope Failure on the Northern Cascadia Margin Frontal Ridge
NASA Astrophysics Data System (ADS)
Pohlman, J. W.; Riedel, M.; Waite, W.; Rose, K.; Lapham, L.; Hamilton, T. S.; Enkin, R.; Spence, G. D.; Hyndman, R.; Haacke, R.
2008-12-01
Numerous submarine landslides occur along the seaward side of the northern Cascadia margin's frontal ridge. Bottom simulating reflectors (BSRs) are also prevalent beneath the ridge at a sediment depth (~255 mbsf) coincident with the failure of at least one potentially recent slump. By one scenario, the most recent megathrust earthquake on the northern Cascadia margin, which occurred in 1700 A.D., raised the pore pressure and destabilized gas-charged sediment at the BSR depth. If true, the exposed seafloor within the slide's sole would contain gas-charged, sulfate-free sediment immediately following the slope failure. Over time, sulfate would diffuse into the exposed sediment and re-establish an equilibrium sulfate gradient. In this study, three 1-5 km wide collapse structures and the surrounding areas were cored during the Natural Resources Canada (NRCan) supported cruise PGC0807 to determine if the failures were related to over- pressurized gas and constrain the age of the slumps. Sulfate and methane gradients were measured from cores typically collected along a transect from the headwall scarp, and down to the toe of the slide. Rapidly decreasing sulfate concentrations with depth (a proxy for enhanced methane flux toward the seafloor) above the headwall of Lopez slump confirms a high background flux on the crest of the ridge. However, within the cores we recovered from the headwall, slide sole and slide deposits at all sites investigated, sulfate was abundant, methane was largely absent and, correspondingly, sulfate gradients were relatively low. On the basis of these results, methane was either lost from the system during or since the slope failure, or was never present in the high concentrations expected at an exhumed BSR. Numerical models that simulate sulfate diffusion following the slump-induced pore water profile perturbations will be utilized to constrain the age of the slope failures. Complementary sedimentological and geotechnical studies from the geochemically analyzed cores are ongoing to understand the primary factors that initiate and trigger slope failures along the frontal ridge of the northern Cascadia margin. Shipboard scientific party in alphabetical order: R. Enkin (NRCan), L. Esteban (NRCan), R. Haacke (NRCan), T.S. Hamilton (Camosun College), M. Hogg (Camosun), L. Lapham (Florida State), G. Middleton (NRCan), P. Neelands (NRCan), J. Pohlman (USGS), M. Riedel (McGill), K. Rose (USDOE), A. Schlesinger (UVic), G. Standen (Geoforce), A. Stephenson (UVic), S. Taylor (NRCan), W. Waite (USGS), X. Wang (McGill)
Transport and Thermohaline Structure in the Western Tropical North Pacific
NASA Astrophysics Data System (ADS)
Schonau, Martha Coakley
Transport and thermohaline structure of water masses and their respective variability are observed and modeled in the western tropical North Pacific using autonomous underwater gliders, Argo climatology and a numerical ocean state estimate. The North Equatorial Current (NEC) advects subtropical and subpolar water masses into the region that are transported equatorward by the Mindanao Current (MC). Continuous glider observations of these two currents from June 2009 to December 2013 provide absolute geostrophic velocity, water mass structure, and transport. The observations are compared to Argo climatology (Roemmich and Gilson, 2009), wind and precipitation to assess forcing, and annual and interannual variability. Observations are assimilated into a regional ocean state estimate (1/6°) to examine regional transport variability and its relationship to the El Nino-Southern Oscillation phenomenon (ENSO). The NEC, described in Chapter 1, is observed along 134.3°E, from 8.5°N to 16.5°N. NEC thermocline transport is relatively constant, with a variable subthermocline transport that is distinguished by countercurrents centered at 9.6°N and 13.1°N. Correlation between thermocline and subthermocline transport is strong. Isopycnals with subducted water masses, the North Pacific Tropical Water and North Pacific Intermediate Water, have the greatest fine-scale thermohaline variance. The NEC advects water masses into the MC, described in Chapter 2, which flows equatorward along the coast of Mindanao. Gliders observed the MC at a mean latitude of 8.5°N. The Mindanao Undercurrent (MUC) persists in the subthermocline offshore of the MC, with a net poleward transport of intermediate water typical of South Pacific origin. The variable subthermocline transport in the MC/MUC has an inverse linear relationship with the Nino 3.4 index and strongly impacts total transport variability. For both the MC and the NEC, surface salinity and thermocline depth have a strong relationship with ENSO, and there is a relationship between the fine-scale and large-scale isopycnal thermohaline structure. In Chapter 3, a numerical ocean state estimate shows strong interannual variability of regional transport with ENSO. Prior to mature ENSO events, transport in each of the NEC, MC and North Equatorial Countercurrent (NECC) increases. The increase arises from meridional gradients in isopycnal depth related to interannual wind anomalies.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Dynamic multi-coil tailored excitation for transmit B1 correction at 7 Tesla.
Umesh Rudrapatna, S; Juchem, Christoph; Nixon, Terence W; de Graaf, Robin A
2016-07-01
Tailored excitation (TEx) based on interspersing multiple radio frequency pulses with linear gradient and higher-order shim pulses can be used to obtain a uniform flip angle in the presence of large radio frequency transmission (B1+) inhomogeneity. Here, an implementation of dynamic, multislice tailored excitation using the recently developed multi-coil nonlinear shim hardware (MC-DTEx) is reported. MC-DTEx was developed and tested both in a phantom and in vivo at 7 T, and its efficacy was quantitatively assessed. Predicted outcomes of MC-DTEx and DTEx based on spherical harmonic shims (SH-DTEx) were also compared. For a planned 30° flip angle in a phantom, the standard deviation in excitation improved from 28% (regular excitation) to 12% with MC-DTEx. The SD in in vivo excitation improved from 22 to 12%. The improvements achieved with experimental MC-DTEx closely matched the theoretical predictions. Simulations further showed that MC-DTEx outperforms SH-DTEx for both scenarios. Successful implementation of multislice MC-DTEx is presented and is shown to be capable of homogenizing excitation over more than twofold B1+ variations. Its benefits over SH-DTEx are also demonstrated. A distinct advantage of MC hardware over SH shim hardware is the absence of significant eddy current effects, which allows for a straightforward, multislice implementation of MC-DTEx. Magn Reson Med 76:83-93, 2016. © 2015 Wiley Periodicals, Inc.
Furstoss, C; Reniers, B; Bertrand, M J; Poon, E; Carrier, J-F; Keller, B M; Pignol, J P; Beaulieu, L; Verhaegen, F
2009-05-01
A Monte Carlo (MC) study was carried out to evaluate the effects of the interseed attenuation and the tissue composition for two models of 125I low dose rate (LDR) brachytherapy seeds (Medi-Physics 6711, IBt InterSource) in a permanent breast implant. The effect of the tissue composition was investigated because the breast site presents heterogeneities such as glandular and adipose tissue surrounded by air, lungs, and ribs. The absolute MC dose calculations were benchmarked by comparison to the absolute dose obtained from experimental results. Before modeling a clinical case of an implant in a heterogeneous breast, the effects of the tissue composition and the interseed attenuation were studied in homogeneous phantoms. To investigate the tissue composition effect, the dose along the transverse axis of each of the two seed models was calculated and compared in different materials. For each seed model, three seeds sharing the same transverse axis were simulated to evaluate the interseed effect in water as a function of the distance from the seed. A clinical study of a permanent breast 125I implant for a single patient was carried out using four dose calculation techniques: (1) a TG-43 based calculation, (2) a full MC simulation with realistic tissues and seed models, (3) a MC simulation in water with modeled seeds, and (4) a MC simulation with realistic tissues but without modeling the seed geometry. In the latter, a phase space file corresponding to the particles emitted from the external surface of the seed is used at each seed location. The results were compared by calculating the relevant clinical metrics V85, V100, and V200 for this kind of treatment in the target. D90 and D50 were also determined to evaluate the differences in dose and to compare the results to the studies published for permanent prostate seed implants in the literature. The experimental results are in agreement with the MC absolute doses (within 5% for EBT Gafchromic film and within 7% for TLD-100). Important differences between the dose along the transverse axis of the seed in water and in adipose tissue are found (10% at 3.5 cm). The comparisons between the full MC and the TG-43 calculations show that there are no significant differences for V85 and V100. For V200, an 8.4% difference is found, coming mainly from the tissue composition effect. Larger differences (about 10.5% for the model 6711 seed and about 13% for the InterSource125) are determined for D90 and D50. These differences depend on the composition of the breast tissue modeled in the simulation. A variation in the percentage by mass of the mammary gland and adipose tissue can cause important differences in the clinical dose metrics V200, D90, and D50. Even if the authors can conclude that, clinically, the differences in V85, V100, and V200 are acceptable in comparison to the large variation in dose in the treated volume, this work demonstrates that the development of a MC treatment planning system for LDR brachytherapy will improve the dose determination in the treated region and consequently the dose-outcome relationship, especially for skin toxicity.
Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.
Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood
2016-01-01
Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with one array detector was modeled with use of the MCNPX MC code. The MC calculated photon fluence in detector arrays was used for image reconstruction of a simple water phantom as well as polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and the Spline interpolation method. Using MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in the CT number with increasing absorbed dose for the studied gel. Also, our results showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.
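The filtering step of filtered back-projection with a Hann window can be sketched in a few lines. This is a generic Hann-apodized ramp filter applied row-wise in the frequency domain, not the RIT113 or study-specific implementation; the toy sinogram is random data standing in for the MC-computed detector fluence.

```python
import numpy as np

def filter_sinogram(sinogram):
    """Apply a ramp filter apodized by a Hann window to each projection row,
    i.e. the filtering step of filtered back-projection."""
    n = sinogram.shape[1]
    freqs = np.fft.fftfreq(n)                   # cycles per sample
    ramp = np.abs(freqs)
    hann = 0.5 * (1 + np.cos(np.pi * freqs / np.abs(freqs).max()))
    H = ramp * hann                             # Hann-windowed ramp filter
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * H, axis=1))

sino = np.random.rand(180, 128)                 # toy: 180 views x 128 bins
filtered = filter_sinogram(sino)
print(filtered.shape)
```

Back-projecting the filtered rows (with the spline interpolation mentioned above) would complete the reconstruction; the Hann apodization trades a little spatial resolution for suppressed high-frequency noise.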
Young surface of Pluto's Sputnik Planitia caused by viscous relaxation
NASA Astrophysics Data System (ADS)
Wei, Q.; Hu, Y.; Liu, Y.; Lin, D. N. C.; Yang, J.; Showman, A. P.
2017-12-01
The young surface of Pluto's Sputnik Planitia (SP) is one of the most prominent features observed by the New Horizons mission (Moore et al., 2016; Stern et al., 2015). No crater has been confirmed on the heart-shaped SP basin, in contrast to more than 5000 identified over comparable areas elsewhere (Robbins et al., 2016). The SP basin is filled mostly with N2 ice and small amounts of CH4 and CO ice (Protopapa et al., 2017). Previous studies suggested that the SP surface might be renewed through vigorous thermal convection (McKinnon et al., 2016), and that the surface age may be as young as 500,000 years. In this paper, we present numerical simulations demonstrating that craters can be removed by rapid viscous relaxation of N2 ice over much shorter timescales. The crater retention age is less than 1000 years if the N2-ice thickness is several kilometers. References: McKinnon, W. B., Nimmo, F., Wong, T., Schenk, P. M., White, O. L., Roberts, J., . . . Umurhan, O. (2016). Convection in a volatile nitrogen-ice-rich layer drives Pluto's geological vigour. Nature, 534(7605), 82-85. Moore, J. M., McKinnon, W. B., Spencer, J. R., Howard, A. D., Schenk, P. M., Beyer, R. A., . . . White, O. L. (2016). The geology of Pluto and Charon through the eyes of New Horizons. Science, 351(6279), 1284-1293. Protopapa, S., Grundy, W. M., Reuter, D. C., Hamilton, D. P., Dalle Ore, C. M., Cook, J. C., . . . Young, L. A. (2017). Pluto's global surface composition through pixel-by-pixel Hapke modeling of New Horizons Ralph/LEISA data. Icarus, 287, 218-228. doi:10.1016/j.icarus.2016.11.028. Robbins, S. J., Singer, K. N., Bray, V. J., Schenk, P., Lauer, T. R., Weaver, H. A., . . . Porter, S. (2016). Craters of the Pluto-Charon system. Icarus. Stern, S. A., Bagenal, F., Ennico, K., Gladstone, G. R., Grundy, W. M., McKinnon, W. B., . . . Zirnstein, E. (2015). The Pluto system: Initial results from its exploration by New Horizons. Science, 350(6258), aad1815.
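As a back-of-the-envelope check on the relaxation argument, a standard half-space scaling for viscous relaxation of topography of wavelength lambda is tau ~ 4*pi*eta / (rho*g*lambda). The sketch below evaluates this with illustrative values only; the effective N2-ice viscosity in particular is an assumption, not the paper's fitted value.

```python
import math

def relaxation_time(eta, rho, g, wavelength):
    """Half-space viscous relaxation timescale: tau ~ 4*pi*eta / (rho * g * lambda)."""
    return 4.0 * math.pi * eta / (rho * g * wavelength)

# Illustrative numbers only (not the paper's fitted values):
eta = 1.0e16        # Pa s, assumed effective viscosity of the N2-ice layer
rho = 1.0e3         # kg/m^3, approximate N2-ice density
g = 0.62            # m/s^2, surface gravity of Pluto
wavelength = 10e3   # m, crater-scale topographic wavelength

tau = relaxation_time(eta, rho, g, wavelength)
print(f"relaxation timescale ~ {tau / 3.15e7:.0f} years")
```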
Amador-Ruiz, Santiago; Gutierrez, David; Martínez-Vizcaíno, Vicente; Gulías-González, Roberto; Pardo-Guijarro, María J; Sánchez-López, Mairena
2018-07-01
Motor competence (MC) affects numerous aspects of children's daily life. The aims of this study were to evaluate MC, provide population-based percentile values for MC, and determine the prevalence of developmental coordination disorder (DCD) in Spanish schoolchildren. This cross-sectional study included 1562 children aged 4 to 6 years from Castilla-La Mancha, Spain. MC was assessed using the Movement Assessment Battery for Children-Second Edition. Values were analyzed according to age, sex, socioeconomic status (SES), environment (rural/urban), and type of school. Boys scored higher than girls in aiming and catching, whereas girls aged 6 scored higher than boys in balance. Children living in rural areas and those attending public schools obtained better scores in aiming and catching than those from urban areas and private schools. The prevalence of DCD was 9.9%, and a further 7.5% of children were at risk of movement problems. Motor test scores can serve as a valuable reference for evaluating and comparing MC in schoolchildren. Schools should identify motor problems at early ages and design initiatives that prevent or mitigate them. © 2018, American School Health Association.
Should adhesive debonding be simulated for intra-radicular post stress analyses?
Caldas, Ricardo A; Bacchi, Atais; Barão, Valentim A R; Versluis, Antheunis
2018-06-23
To elucidate the influence of debonding on stress distribution and maximum stresses in intra-radicular restorations. Five intra-radicular restorations were analyzed by finite element analysis (FEA): MP = metallic cast post core; GP = glass fiber post core; PP = pre-fabricated metallic post core; RE = resin endocrown; CE = single-piece ceramic endocrown. Two cervical preparations were considered: no ferrule (f0) and a 2 mm ferrule (f1). The simulation was conducted in three steps: (1) intact bonds at all contacts; (2) bond failure between crown and tooth; (3) bond failure among the tooth, post, and crown interfaces. Contact friction and separation between interfaces were modeled where bond failure occurred. Mohr-Coulomb stress ratios (σMC ratio) and fatigue safety factors (SF) for the dentin structure were compared with published strength values, fatigue life, and fracture patterns of teeth with intra-radicular restorations. The σMC ratio showed no differences among models at the first step. The second step increased the σMC ratio at the ferrule compared with step 1. At the third step, the σMC ratio and SF for the f0 models were strongly influenced by the post material: the CE and RE models had the highest σMC ratios and the lowest SF, while MP had the lowest σMC ratio and the highest SF. The f1 models showed no relevant differences among them at the third step. FEA most closely predicted the failure performance of intra-radicular posts when frictional contact was modeled. Results of analyses in which all interfaces are assumed to be perfectly bonded should be interpreted with caution. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
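For orientation, one common form of the Mohr-Coulomb stress ratio for brittle materials combines the maximum and minimum principal stresses with the tensile and compressive strengths; failure is predicted when the ratio reaches 1. The sketch below uses this form with illustrative dentin-like strength values, not the paper's data, and the simple static safety factor shown is a stand-in for, not a reproduction of, the paper's fatigue SF.

```python
def mohr_coulomb_ratio(sigma1, sigma3, tensile_strength, compressive_strength):
    """Mohr-Coulomb stress ratio; failure is predicted when the ratio reaches 1.
    sigma1/sigma3 are the max/min principal stresses (compression negative)."""
    return sigma1 / tensile_strength - sigma3 / compressive_strength

# Illustrative dentin-like strengths (literature-style magnitudes, not the paper's data):
ST, SC = 100.0, 300.0   # MPa, tensile / compressive strength
ratio = mohr_coulomb_ratio(sigma1=40.0, sigma3=-120.0,
                           tensile_strength=ST, compressive_strength=SC)
safety_factor = 1.0 / ratio   # simple static safety factor against failure
print(f"sigma_MC ratio = {ratio:.2f}, SF = {safety_factor:.2f}")
```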
Featured Image: Mixing Chemicals in Stars
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-10-01
How do stars mix chemicals in their interiors, leading to the abundances we measure at their surfaces? Two scientists affiliated with the Planetary Science Institute in Arizona, Tamara Rogers (Newcastle University, UK) and Jim McElwaine (Durham University, UK), have investigated the role that internal gravity waves play in chemical mixing in stellar interiors. Internal gravity waves (not to be confused with the currently topical gravitational waves) are waves that oscillate within a fluid that has a density gradient. Rogers and McElwaine used simulations to explore how these waves can cause particles in a star's interior to move around, gradually mixing the different chemical elements. Snapshots from four different times in their simulation can be seen below, with the white dots marking tracer particles and the colors indicating vorticity. You can see how the particles move in response to wave motion after the first panel. For more information, check out the paper below! Citation: T. M. Rogers and J. N. McElwaine 2017 ApJL 848 L1. doi:10.3847/2041-8213/aa8d13
NASA Astrophysics Data System (ADS)
Bieda, Bogusław; Grzesik, Katarzyna
2017-11-01
The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth element (REE) recovery from secondary materials processing, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties; a stochastic approach characterizes uncertainties better than a deterministic one. Uncertainty in the data can be expressed by defining a probability distribution for the data (e.g., through its standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on the literature, and (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g., particulates, uranium-238, thorium-232), energy, and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best solution for quantifying the uncertainty in the LCI/LCA. LCA results are uncertain to some degree, but this uncertainty can be characterized with the help of the MC method.
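A minimal sketch of the kind of MC uncertainty propagation described here: each inventory flow is assigned a probability distribution (a lognormal parameterized by a median and geometric standard deviation is a common choice for ecoinvent-style data) and sampled repeatedly. All flow names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Hypothetical inventory flows: (median, geometric standard deviation) for
# lognormally distributed quantities.
flows = {
    "particulates_kg": (0.8, 1.3),
    "uranium238_kBq": (12.0, 1.5),
    "thorium232_kBq": (9.0, 1.5),
}

for name, (median, gsd) in flows.items():
    samples = rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=N)
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print(f"{name}: median={np.median(samples):.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```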
The high performance of nanocrystalline CVD diamond coated hip joints in wear simulator test.
Maru, M M; Amaral, M; Rodrigues, S P; Santos, R; Gouvea, C P; Archanjo, B S; Trommer, R M; Oliveira, F J; Silva, R F; Achete, C A
2015-09-01
Nanocrystalline diamond (NCD) coatings grown by chemical vapor deposition (CVD) have already demonstrated high wear resistance in ball-on-plate experiments under physiological liquid lubrication. However, tests under close-to-real conditions were missing, and this constitutes the aim of the present work. Hip joint wear simulator tests were performed with cups and heads made of silicon nitride coated with NCD about 10 μm in thickness. Five million testing cycles (Mc) were run, representing nearly five years of hip joint implant activity in a patient. For the wear analysis, gravimetry, profilometry, scanning electron microscopy, and Raman spectroscopy were used. After 0.5 Mc of wear testing, truncation of the protruding regions of the NCD film occurred as a result of a fine-scale abrasive wear mechanism, evolving into extensive plateau regions and a highly polished surface (Ra < 10 nm). This surface modification took place without any catastrophic features such as cracking, grain pullout, or delamination of the coating. A steady-state volumetric wear rate of 0.02 mm³/Mc, equivalent to a linear wear of 0.27 μm/Mc, compares favorably with the best performance reported in the literature for fourth-generation alumina ceramics (0.05 mm³/Mc). Also, squeaking, a common phenomenon in hard-on-hard systems, was absent in the present all-NCD system. Copyright © 2015 Elsevier Ltd. All rights reserved.
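The volumetric wear rates quoted above follow from gravimetry by converting the mass loss to a volume via the material density and normalizing by the number of cycles. A minimal sketch follows; the density and mass-loss values are assumptions chosen only to match the reported order of magnitude.

```python
def volumetric_wear_rate(mass_loss_mg, density_g_cm3, cycles_millions):
    """Convert gravimetric mass loss to a volumetric wear rate in mm^3/Mc."""
    volume_mm3 = (mass_loss_mg / 1000.0) / density_g_cm3 * 1000.0  # mg -> g -> cm^3 -> mm^3
    return volume_mm3 / cycles_millions

# Illustrative numbers consistent in magnitude with the reported 0.02 mm^3/Mc:
rate = volumetric_wear_rate(mass_loss_mg=0.32, density_g_cm3=3.2,
                            cycles_millions=5.0)
print(f"wear rate ~ {rate:.3f} mm^3/Mc")
```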
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard
2007-06-01
The simultaneous use of the Reaction Ensemble Monte Carlo (ReMC) method and the Adaptive Erpenbeck EOS (AE-EOS) method allows us to calculate directly the thermodynamic and chemical equilibrium of a mixture on the Hugoniot curve. The ReMC method allows the detonation products to reach chemical equilibrium, while the AE-EOS method constrains the system to satisfy the Hugoniot relation. Once the Crussard curve of the detonation products has been established, CJ state properties can be calculated. An additional NPT simulation is performed at CJ conditions in order to compute derivative thermodynamic quantities such as Cp, Cv, the Grüneisen gamma, the sound velocity, and the compressibility factor. Several explosives have been studied, including PETN, nitromethane, tetranitromethane, and hexanitroethane. In these first simulations, solid carbon, when present, is treated using an EOS.
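For context, the Hugoniot relation that the AE-EOS procedure enforces is the Rankine-Hugoniot energy condition, E - E0 = (1/2)(P + P0)(V0 - V). A minimal sketch of the corresponding residual, which such a scheme drives to zero, is given below; the function name and numerical values are ours, not from the paper.

```python
def hugoniot_residual(E, P, V, E0, P0, V0):
    """Rankine-Hugoniot energy condition: zero when the shocked state (E, P, V)
    lies on the shock adiabat through the initial state (E0, P0, V0)."""
    return E - E0 - 0.5 * (P + P0) * (V0 - V)

# Illustrative SI values only (J/kg, Pa, m^3/kg); a ReMC/AE-EOS scheme would
# adjust the simulated state until this residual vanishes.
res = hugoniot_residual(E=4.8e6, P=2.0e10, V=4.0e-4,
                        E0=0.0, P0=1.0e5, V0=8.8e-4)
print(f"Hugoniot residual: {res:.3e} J/kg")
```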
A simulation model of IT risk on program trading
NASA Astrophysics Data System (ADS)
Xia, Bingying; Jiang, Wenbao; Luo, Guangxuan
2015-12-01
The biggest difficulty in measuring the IT risk of program trading lies in the lack of loss data. In view of this, the current approach among scholars is to collect reports of IT incidents at home and abroad from courts, the internet, and other public media, and to base quantitative analysis of IT risk losses on the resulting database. However, an IT risk loss database built this way can only fuzzily reflect the real situation and cannot explain its fundamentals. In this paper, building on the concept and steps of MC simulation, we use a computer simulation method: the MC method is applied within the "Program trading simulation system" developed by our team to simulate real program trading, and IT risk loss data are obtained through its IT failure experiments. At the end of the article, the validity of the experimental data is verified. This approach overcomes the deficiencies of the traditional research method and solves the problem of the lack of IT risk data in quantitative research. It also provides researchers with a template of ideas and processes for studying such problems with simulation methods.
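A minimal sketch of the frequency/severity style of MC loss simulation that such a study implies: event counts per day are drawn from a Poisson distribution and per-event losses from a lognormal. All parameters are hypothetical, and this is only a generic stand-in for the team's simulation system, which is not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # simulated trading days

# Hypothetical frequency/severity model for IT failures in a program-trading
# system: Poisson event counts, lognormal loss per event (assumed parameters).
counts = rng.poisson(lam=0.2, size=N)               # failures per day
losses = np.array([rng.lognormal(mean=10.0, sigma=1.2, size=c).sum()
                   for c in counts])                # total daily loss

print(f"mean daily IT loss: {losses.mean():,.0f}")
print(f"99% one-day loss quantile (VaR-style): {np.percentile(losses, 99):,.0f}")
```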
Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank
2018-02-01
Cyclotron-based pencil beam scanning (PBS) proton machines represent nowadays the majority and the most affordable choice for proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than for passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum to the desired energy, resulting in a spot size, divergence, and energy spread that depend on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated against experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations of how MC codes can be adapted to simulate cyclotron-based scanning proton machines are given, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with respect to experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model better describes the experimental beam. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
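For reference, the γ(3%-3 mm) comparison used above combines a dose-difference criterion with a distance-to-agreement criterion; a point passes when the combined metric is at most 1. Below is a minimal global 1-D sketch; the profiles are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_pct=3.0):
    """Simple global 1-D gamma index (dose difference as % of reference max,
    distance-to-agreement in mm); exhaustive search over evaluated points."""
    dd_norm = dd_pct / 100.0 * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist = (x_eval - xr) / dta_mm
        dose = (d_eval - dr) / dd_norm
        gammas[i] = np.sqrt(dist**2 + dose**2).min()
    return gammas

# Illustrative profiles: a slightly shifted Gaussian standing in for
# measured vs. MC-simulated lateral dose profiles.
x = np.linspace(-30.0, 30.0, 301)                 # mm
measured = np.exp(-x**2 / (2 * 8.0**2))
simulated = np.exp(-(x - 0.5)**2 / (2 * 8.2**2))
g = gamma_1d(x, measured, x, simulated)
print(f"gamma(3%/3mm) pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```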