A virtual source model for Monte Carlo simulation of helical tomotherapy.
Yuan, Jiankui; Rong, Yi; Chen, Quan
2015-01-08
The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need of calculating phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency contained in the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement, with 98% of planar dose pixels passing the 2%/2 mm gamma criterion. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media.
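To illustrate the fluence-sampling step described above, the sketch below builds a one-dimensional fluence map from per-leaf open times, blurs it with a Gaussian kernel standing in for the combined leaf-filter/jaw-penumbra effect, and draws source particles from it by inverse-CDF sampling. All numbers (LOT values, leaf width, penumbra width) are illustrative assumptions, not the commissioned VSM parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-leaf open times (fraction of projection time) for one projection.
lot = np.array([0.0, 0.2, 0.9, 1.0, 0.7, 0.0])   # 6 leaves, illustrative only
leaf_width = 0.625                                # cm at isocentre (assumed)

# Ideal fluence follows the leaf open times; blurring with a Gaussian kernel
# stands in for the combined leaf-filter / jaw-penumbra effect.
nx = 600
x = np.linspace(0.0, lot.size * leaf_width, nx)
ideal = lot[np.minimum((x / leaf_width).astype(int), lot.size - 1)]
sigma = 0.05                                      # cm, assumed penumbra width
dx = x[1] - x[0]
kx = np.arange(-4 * sigma, 4 * sigma, dx)
kernel = np.exp(-0.5 * (kx / sigma) ** 2)
fluence = np.convolve(ideal, kernel / kernel.sum(), mode="same")

# Draw source-particle lateral positions in proportion to the fluence map.
cdf = np.cumsum(fluence + 1e-12)
cdf /= cdf[-1]
samples = np.interp(rng.random(100_000), cdf, x)  # inverse-CDF sampling
```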
Borzov, Egor; Daniel, Shahar; Bar‐Deroma, Raquel
2016-01-01
Total skin electron irradiation (TSEI) is a complex technique which requires many nonstandard measurements and dosimetric procedures. The purpose of this work was to validate measured dosimetry data by Monte Carlo (MC) simulations using EGSnrc-based codes (BEAMnrc and DOSXYZnrc). Our MC simulations consisted of two major steps. In the first step, the incident electron beam parameters (energy spectrum, FWHM, mean angular spread) were adjusted to match the measured data (PDD and profile) at SSD = 100 cm for an open field. In the second step, these parameters were used to calculate dose distributions at the treatment distance of 400 cm. MC simulations of dose distributions from single and dual fields at the treatment distance were performed in a water phantom. Dose distribution from the full treatment with six dual fields was simulated in a CT-based anthropomorphic phantom. MC calculations were compared to the available set of measurements used in clinical practice. For one direct field, MC-calculated PDDs agreed within 3%/1 mm with the measurements, and lateral profiles agreed within 3% with the measured data. For the output factor (OF), the measured and calculated results were within 2% agreement. The optimal angle of 17° was confirmed for the dual field setup. The MC-calculated multiplication factor (B12-factor), which relates the skin dose for the whole treatment to the dose from one calibration field, was 2.9 and 2.8 for the setups with and without degrader, respectively. The measured B12-factor was 2.8 for both setups. The difference between calculated and measured values was within 3.5%. It was found that a degrader provides a more homogeneous dose distribution. The measured X-ray contamination for the full treatment was 0.4%, compared to the 0.5% X-ray contamination obtained with the MC calculation. The feasibility of MC simulation of a full TSEI treatment in an anthropomorphic phantom was demonstrated and is reported for the first time in the literature. The results of our MC calculations were found to be in general agreement with the measurements, providing a promising tool for further studies of dose distribution calculations in TSEI. PACS number(s): 87.10.Rt, 87.55.K, 87.55.ne PMID:27455502
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency of SPECT imaging simulations.
OpenMC In Situ Source Convergence Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee
2016-05-07
We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, after which it is assumed that convergence has been achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect convergence and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting overly optimistic parameters, or too late, by setting overly conservative ones.
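A minimal sketch of the test described above: fission-source sites are binned on a mesh each batch, the Shannon entropy of the binned distribution is computed, and a stochastic-oscillator-style indicator over a trailing window decides when tallying may begin. The window length, oscillator band, and the multinomial stand-in source are assumptions for illustration; this is not the OpenMC implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon_entropy(counts):
    """Shannon entropy of the binned fission-source distribution."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def converged(entropies, window=20, band=(0.2, 0.8)):
    """Oscillator-style test: the newest entropy must sit inside an inner band
    of the trailing window's [min, max] envelope (thresholds are assumed)."""
    if len(entropies) < window:
        return False
    h = np.asarray(entropies[-window:])
    lo, hi = h.min(), h.max()
    if hi - lo < 1e-12:
        return True
    k = (h[-1] - lo) / (hi - lo)              # oscillator value in [0, 1]
    return band[0] < k < band[1]

entropies = []
for batch in range(200):
    # Stand-in for binning this batch's fission sites on a 64-cell mesh.
    counts = rng.multinomial(10_000, np.ones(64) / 64)
    entropies.append(shannon_entropy(counts))
    if converged(entropies):
        print(f"source converged; begin tallying at batch {batch}")
        break
```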
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS
NASA Astrophysics Data System (ADS)
Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.
2014-07-01
Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons is simulated using a full scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide, and by using newly developed event-logging capability, the neutron state parameters corresponding to un-reflected neutrons are recorded at each scattering. This information is handed back to MCNPX, where it serves as neutron source input for a second MCNPX simulation. This simulation enables calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism is employed to record the scatterings along the guides, which is exploited to determine the supermirror quality requirements (i.e., m-values) needed at different positions along the beam guide to transport neutrons in the same guide/source setup.
Simulation - McCandless, Bruce (Syncom IV)
1985-04-15
S85-30800 (14 April 1985) --- Astronaut Bruce McCandless II tests one of the possible methods of attempting to activate a switch on the Syncom-IV (LEASAT) satellite released April 13 into space from the Space Shuttle Discovery. The communications spacecraft failed to behave properly upon release and NASA officials and satellite experts are considering possible means of repair. McCandless was using a full scale mockup of the satellite in the Johnson Space Center's (JSC) mockup and integration laboratory.
Astronaut William McArthur prepares for a training exercise
1993-07-20
S93-38686 (20 July 1993) --- Wearing a training version of the partial pressure launch and entry garment, astronaut William S. McArthur prepares to rehearse emergency egress procedures for the STS-58 mission. McArthur, along with the five other NASA astronauts and a visiting payload specialist assigned to the seven-member crew, later simulated contingency evacuation procedures. Most of the training session took place in the crew compartment and full fuselage trainers of the Space Shuttle mockup and integration laboratory.
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.
D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc
2011-12-01
Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on precalculated dose kernels in full patient geometries, each representing the dose distribution of a brachytherapy source at a single dwell position, computed using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method that fully accounts for the heterogeneities in dose optimization. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. Copyright © 2011 Elsevier Inc. All rights reserved.
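The optimization step can be sketched as simulated annealing over dwell times using precalculated per-dwell MC dose kernels: one dwell time is perturbed per iteration, and uphill moves are accepted with a Boltzmann probability. The kernel values, objective, move size, and cooling schedule below are placeholder assumptions, not the IPSA implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Precalculated MC dose kernels: dose to each of n_vox voxels per unit dwell
# time at each of n_dwell positions (random placeholders, not real kernels).
n_vox, n_dwell = 500, 40
kernels = rng.random((n_vox, n_dwell)) * 0.1
target = np.ones(n_vox)                       # prescribed dose, arbitrary units

def cost(t):
    return np.sum((kernels @ t - target) ** 2)

# Simulated-annealing loop over dwell times (assumed move/cooling scheme).
t = np.full(n_dwell, 0.5)
temp, best = 1.0, cost(t)
for _ in range(20_000):
    trial = t.copy()
    j = rng.integers(n_dwell)
    trial[j] = max(0.0, trial[j] + rng.normal(0, 0.05))  # non-negative dwell
    c = cost(trial)
    if c < best or rng.random() < np.exp((best - c) / temp):
        t, best = trial, c
    temp *= 0.9997                            # geometric cooling
```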
Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R
2014-03-01
Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Drukker, Karen; Hammes-Schiffer, Sharon
1997-07-01
This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.
Yeo, Sang Chul; Lo, Yu Chieh; Li, Ju; Lee, Hyuck Mo
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (Eb) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, the eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
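The event-selection core of such a kMC simulation can be sketched with a rejection-free (Gillespie-type) step: Arrhenius rates are built from the DFT barriers, one event is drawn in proportion to its rate, and the clock advances by an exponentially distributed residence time. The barrier values, attempt frequency, and temperature below are assumed placeholders (in the study the barriers depend on N coverage), not the computed values.

```python
import numpy as np

rng = np.random.default_rng(3)
kB, T, nu = 8.617e-5, 700.0, 1e13   # eV/K, K, s^-1 attempt frequency (all assumed)

# Placeholder barriers (eV) for the eight events named in the abstract.
barriers = {"adsorption": 0.1, "desorption": 0.9, "dissociation": 1.1,
            "reverse dissociation": 1.0, "surface migration": 0.5,
            "penetration": 1.3, "reverse penetration": 1.2, "diffusion": 0.7}
names = list(barriers)
rates = np.array([nu * np.exp(-barriers[n] / (kB * T)) for n in names])

def kmc_step(t):
    """One rejection-free kMC step: pick an event by its rate, advance the clock."""
    total = rates.sum()
    event = names[np.searchsorted(np.cumsum(rates) / total, rng.random())]
    dt = -np.log(rng.random()) / total        # exponential residence time
    return event, t + dt

t = 0.0
for _ in range(5):
    event, t = kmc_step(t)
    print(f"t = {t:.3e} s: {event}")
```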
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
MMU development at the Martin Marietta plant in Denver, Colorado
1980-07-25
S80-36889 (24 July 1980) --- Astronaut Bruce McCandless II uses a simulator at Martin Marietta's space center near Denver to develop flight techniques for a backpack propulsion unit that will be used on Space Shuttle flights. The manned maneuvering unit (MMU) training simulator allows astronauts to "fly missions" against a full scale mockup of a portion of the orbiter vehicle. Controls of the simulator are like those of the actual MMU. Manipulating them allows the astronaut to move in three straight-line directions and in pitch, yaw and roll. One possible application of the MMU is for an extravehicular activity chore to repair damaged tiles on the vehicle. McCandless is wearing an extravehicular mobility unit (EMU).
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
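The history-batching idea described above, which eliminates mid-run tally communication, can be sketched in a few lines: each process reduces its own per-history scores to batch means locally, and only those batch means need to be combined across the network at the end of the simulation. The score distribution and batch size below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each "processor" keeps its own history scores and reduces them to batch
# means locally; only the per-batch means cross the network at the end.
def batch_statistics(scores, histories_per_batch):
    n_batches = scores.size // histories_per_batch
    batches = scores[: n_batches * histories_per_batch].reshape(n_batches, -1)
    means = batches.mean(axis=1)             # one number per batch
    mu = means.mean()
    sem = means.std(ddof=1) / np.sqrt(n_batches)
    return mu, sem

scores = rng.exponential(1.0, size=100_000)  # stand-in per-history tally scores
print(batch_statistics(scores, histories_per_batch=1_000))
```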
SU-F-T-33: Air-Kerma Strength and Dose Rate Constant by the Full Monte Carlo Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsuji, S; Oita, M; Narihiro, N
2016-06-15
Purpose: In general, the air-kerma strength (Sk) has been determined by weighting the photon energy fluence by the corresponding mass-energy absorption coefficient or mass-energy transfer coefficient. Kerma is an acronym for kinetic energy released per unit mass, defined as the sum of the initial kinetic energies of all the charged particles. Monte Carlo (MC) simulations can follow the kinetic energy of the charged particles after photon interactions and sum the energy. The Sk of a 192Ir source is obtained in a full MC simulation and finally the dose rate constant Λ is determined. Methods: MC simulations were performed using EGS5 with the microSelectron HDR v2 type of 192Ir source. The air-kerma rate was obtained by summing the electron kinetic energy released after photoelectric absorption or Compton scattering, at transverse-axis distances from 1 to 120 cm in a 10 m diameter air phantom. Absorbed dose in water was simulated with a 30 cm diameter water phantom. The transport cut-off energy was 10 keV; 240 billion primary photons from the source were needed for the air-kerma rate and 30 billion for the absorbed dose in water. Results: Sk was determined by multiplying the air-kerma rate by the square of the distance and fitting a linear function. The result is Sk = (2.7039±0.0085)×10⁻¹¹ µGy m² Bq⁻¹ s⁻¹. The absorbed dose rate in water at 1 cm transverse-axis distance is D(r₀, θ₀) = (3.0114±0.0015)×10⁻¹¹ cGy Bq⁻¹ s⁻¹. Conclusion: From these results, the dose rate constant Λ of the microSelectron HDR v2 type of 192Ir source is (1.1137±0.0035) cGy h⁻¹ U⁻¹ by the full MC simulations. The consensus value conΛ is (1.109±0.012) cGy h⁻¹ U⁻¹. The result is consistent with the consensus data conΛ.
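The closing arithmetic can be checked directly from the quoted results: because 1 U = 1 µGy m² h⁻¹, the per-becquerel factors and the time units cancel in the ratio, leaving Λ in cGy h⁻¹ U⁻¹. A minimal check using only the numbers from the abstract:

```python
# Reproduce the final arithmetic from the quoted results (values from the abstract).
S_k = 2.7039e-11      # µGy m^2 / (Bq s)  -> air-kerma strength per unit activity
D_ref = 3.0114e-11    # cGy / (Bq s)      -> dose rate in water at r0 = 1 cm

# 1 U = 1 µGy m^2 h^-1, so the Bq^-1 s^-1 factors cancel in the ratio:
Lambda = D_ref / S_k  # cGy h^-1 U^-1
print(f"dose rate constant = {Lambda:.4f} cGy h^-1 U^-1")  # ~1.1137, as quoted
```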
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
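The inner loop that such GPU codes parallelise (one thread per photon in MCML/GPU-MCML) is the weighted random walk sketched below: sample a free path, deposit a fraction of the photon weight, and scatter through a Henyey-Greenstein angle. This serial sketch assumes an infinite homogeneous medium with illustrative optical properties; layered geometry, boundary refraction, and Russian roulette are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
mu_a, mu_s, g = 0.1, 10.0, 0.9     # 1/cm, 1/cm, HG anisotropy (tissue-like, assumed)
mu_t = mu_a + mu_s

def run(n_photons=500, n_bins=100, dz=0.05):
    """Serial sketch of the random walk each GPU thread executes in parallel.
    Infinite homogeneous medium, pencil beam along +z; scores absorbed weight."""
    dose = np.zeros(n_bins)
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0
        w = 1.0
        while w > 1e-3:                               # roulette omitted for brevity
            s = -np.log(rng.random()) / mu_t          # free-path length
            x, y, z = x + s * ux, y + s * uy, z + s * uz
            if 0.0 <= z < n_bins * dz:
                dose[int(z / dz)] += w * mu_a / mu_t  # partial absorption
            w *= mu_s / mu_t
            xi = rng.random()                         # Henyey-Greenstein polar angle
            ct = (1 + g * g - ((1 - g * g) / (1 - g + 2 * g * xi)) ** 2) / (2 * g)
            st = np.sqrt(max(0.0, 1 - ct * ct))
            phi = 2 * np.pi * rng.random()
            if abs(uz) > 0.99999:                     # near-vertical special case
                ux, uy, uz = st * np.cos(phi), st * np.sin(phi), ct * np.sign(uz)
            else:
                den = np.sqrt(1 - uz * uz)
                ux, uy, uz = (st * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / den + ux * ct,
                              st * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / den + uy * ct,
                              -st * np.cos(phi) * den + uz * ct)
    return dose / n_photons

depth_profile = run()
```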
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroniger, K; Herzog, M; Landry, G
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth-dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth-dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1-2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of a full MC simulation.
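A minimal sketch of the filter-function idea: the prompt-gamma depth profile is obtained by convolving the depth-dose curve with one filter function per chemical element, weighted by the elemental composition. The dose curve, filter shapes, and weights below are crude placeholders; the real filters are fitted against MC reference data.

```python
import numpy as np

z = np.arange(0, 20, 0.1)                   # depth grid, cm

# Stand-in depth-dose profile of a mono-energetic proton pencil beam
# (crude Bragg-like shape; in practice this comes from the TPS).
depth_dose = 0.3 + 0.02 * z + 1.2 * np.exp(-0.5 * ((z - 15.0) / 0.4) ** 2)

# One assumed filter function per chemical element; the prompt-gamma profile
# is the dose profile convolved with the composition-weighted filters.
filters = {"O": np.exp(-0.5 * (np.arange(-2, 2, 0.1) / 0.5) ** 2),
           "C": np.exp(-np.abs(np.arange(-2, 2, 0.1)))}
weights = {"O": 0.65, "C": 0.35}            # assumed elemental composition

prompt_gamma = sum(weights[e] * np.convolve(depth_dose, f / f.sum(), mode="same")
                   for e, f in filters.items())
```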
Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun
2018-01-01
One of the major benefits of carbon ion therapy is enhanced biological effectiveness at the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hour for the prostate case (8282 spots), 0.2 hour for the pancreas case (3795 spots), and 0.3 hour for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
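The optimization loop can be sketched as gradient descent on a quadratic objective over the biological effect, with the linear-quadratic effect per voxel computed from the spot doses. The dose-influence matrix, α/β values, step size, and projection onto non-negative weights below are illustrative assumptions, not goCMC's RMF-based implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_vox, n_spots = 300, 50
D = rng.random((n_vox, n_spots)) * 0.2   # stand-in dose-influence matrix (Gy/weight)
alpha, beta = 0.2, 0.02                  # assumed LQ parameters (Gy^-1, Gy^-2)
e_presc = 2.0                            # prescribed biological effect per voxel

def effect(w):
    d = D @ w
    return alpha * d + beta * d ** 2     # voxel-wise linear-quadratic effect

w = np.full(n_spots, 1.0)
lr = 1e-3
for _ in range(5_000):
    d = D @ w
    grad_e = alpha + 2 * beta * d        # d(effect)/d(dose), per voxel
    grad = D.T @ (2 * (effect(w) - e_presc) * grad_e)
    w = np.maximum(w - lr * grad, 0.0)   # project onto non-negative spot weights
```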
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
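A sketch of the projection-domain correction step described above, using generic array operations: the coarse MC scatter estimate is denoised, upsampled to the detector grid, and subtracted from the raw projection. The function name, smoothing width, and downsampling factor are assumptions for illustration, not the authors' GPU pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def correct_projection(raw_proj, mc_scatter_coarse, coarse_factor=4):
    """Denoise the coarse MC scatter estimate, upsample it to the detector
    grid, and subtract it from the raw projection (all factors assumed)."""
    scatter = gaussian_filter(mc_scatter_coarse, sigma=1.0)  # scatter is smooth
    scatter = zoom(scatter, coarse_factor, order=1)          # back to full grid
    scatter = scatter[: raw_proj.shape[0], : raw_proj.shape[1]]
    return np.maximum(raw_proj - scatter, 0.0)               # keep projections physical

rng = np.random.default_rng(7)
raw = rng.normal(1.0, 0.05, (512, 512)) + 0.2     # stand-in raw projection
coarse = np.full((128, 128), 0.2)                 # stand-in coarse MC scatter estimate
corrected = correct_projection(raw, coarse)
```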
STS-92 Mission Specialist McArthur has his launch and entry suit adjusted
NASA Technical Reports Server (NTRS)
2000-01-01
During pre-pack and fit check in the Operations and Checkout Building, STS-92 Mission Specialist William S. McArthur Jr. uses a laptop computer while garbed in his full launch and entry suit. McArthur and the rest of the crew are at KSC for Terminal Countdown Demonstration Test activities. The TCDT provides emergency egress training, simulated countdown exercises and opportunities to inspect the mission payload. This mission will be McArthur's third Shuttle flight. STS-92 is scheduled to launch Oct. 5 at 9:38 p.m. EDT from Launch Pad 39A on the fifth flight to the International Space Station. It will carry two elements of the Space Station, the Integrated Truss Structure Z1 and the third Pressurized Mating Adapter. The mission is also the 100th flight in the Shuttle program.
2006-10-01
The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum ... (kinetic Monte Carlo, kMC; equilibrium MC; Lattice-Boltzmann, LB; Brownian Dynamics, BD; or general agent-based, AB) simulators. It also, fortuitously ... cond-mat/0310460 at arXiv.org.
SU-G-JeP2-15: Proton Beam Behavior in the Presence of Realistic Magnet Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, D M; Wachowicz, K; Fallone, B G
2016-06-15
Purpose: To investigate the effects of magnetic fields on proton therapy beams for integration with MRI. Methods: 3D magnetic fields from an open-bore superconducting MRI model (previously developed by our group) and 3D magnetic fields from an in-house gradient coil design were applied to various mono-energetic proton pencil beam (80 MeV to 250 MeV) simulations. In all simulations, the z-axis of the simulation geometry coincided with the direction of the B0 field and magnet isocentre. In each simulation, the initial beam trajectory was varied. The first set of simulations performed was based on analytic magnetic force equations (analytic simulations), which could be rapidly calculated yet were limited to propagating proton beams in vacuum. The second set is full Monte Carlo (MC) simulations, which used the GEANT4 MC toolkit. Metrics such as the beam position and dose profiles were extracted. Comparisons between the cases with and without magnetic fields present were made. Results: The analytic simulations served as verification checks for the MC simulations when the same simulation geometries were used. The results of the analytic simulations agreed with the MC simulations performed in vacuum. The presence of the MRI's static magnetic field causes proton pencil beams to follow a slight helical trajectory when there are some initial off-axis components. The 80 MeV, 150 MeV, and 250 MeV proton beams rotated by 4.9°, 3.6°, and 2.8°, respectively, when they reached z = 0 cm. The deflections caused by the gradient coils' magnetic fields show spatially invariant patterns with a maximum range of 0.5 mm at z = 0 cm. Conclusion: This investigation reveals that both the MRI's B0 and gradient magnetic fields can cause small but observable deflections of proton beams at the energies studied. The MRI's static field caused a rotation of the beam while the gradient coils' field effects were spatially invariant. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473
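The elementary ingredient reviewed above, a Metropolis move in internal coordinates, can be sketched in a few lines for a single dihedral angle in a toy torsional potential; the potential and move width are arbitrary placeholders for a real force field and an optimized moveset.

```python
import numpy as np

rng = np.random.default_rng(8)
kT = 0.593                                   # kcal/mol at ~298 K

def energy(phi):
    """Toy 1-D torsional potential standing in for a real force field."""
    return 2.0 * (1 + np.cos(3 * phi)) + 0.5 * (1 - np.cos(phi))

phi, e = 0.0, energy(0.0)
samples = []
for _ in range(50_000):
    trial = phi + rng.uniform(-0.3, 0.3)     # move width is the tunable parameter
    e_t = energy(trial)
    if e_t < e or rng.random() < np.exp(-(e_t - e) / kT):  # Metropolis criterion
        phi, e = trial, e_t
    samples.append(phi)
```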
Furstoss, C; Reniers, B; Bertrand, M J; Poon, E; Carrier, J-F; Keller, B M; Pignol, J P; Beaulieu, L; Verhaegen, F
2009-05-01
A Monte Carlo (MC) study was carried out to evaluate the effects of the interseed attenuation and the tissue composition for two models of 125I low dose rate (LDR) brachytherapy seeds (Medi-Physics 6711, IBt InterSource) in a permanent breast implant. The effect of the tissue composition was investigated because the breast localization presents heterogeneities such as glandular and adipose tissue surrounded by air, lungs, and ribs. The absolute MC dose calculations were benchmarked by comparison to the absolute dose obtained from experimental results. Before modeling a clinical case of an implant in a heterogeneous breast, the effects of the tissue composition and the interseed attenuation were studied in homogeneous phantoms. To investigate the tissue composition effect, the doses along the transverse axis of the two seed models were calculated and compared in different materials. For each seed model, three seeds sharing the same transverse axis were simulated to evaluate the interseed effect in water as a function of the distance from the seed. A clinical study of a permanent breast 125I implant for a single patient was carried out using four dose calculation techniques: (1) a TG-43 based calculation, (2) a full MC simulation with realistic tissues and seed models, (3) a MC simulation in water with modeled seeds, and (4) a MC simulation without modeling the seed geometry but with realistic tissues. In the latter, a phase space file corresponding to the particles emitted from the external surface of the seed is used at each seed location. The results were compared by calculating the relevant clinical metrics V85, V100, and V200 for this kind of treatment in the target. D90 and D50 were also determined to evaluate the differences in dose and compare the results to the studies published for permanent prostate seed implants in the literature. The experimental results are in agreement with the MC absolute doses (within 5% for EBT Gafchromic film and within 7% for TLD-100). Important differences between the dose along the transverse axis of the seed in water and in adipose tissue are obtained (10% at 3.5 cm). The comparisons between the full MC and the TG-43 calculations show that there are no significant differences for V85 and V100. For V200, a difference of 8.4% is found, coming mainly from the tissue composition effect. Larger differences (about 10.5% for the model 6711 seed and about 13% for the InterSource125) are determined for D90 and D50. These differences depend on the composition of the breast tissue modeled in the simulation. A variation in the percentage by mass of the mammary gland and adipose tissue can cause important differences in the clinical dose metrics V200, D90, and D50. Even if the authors can conclude that clinically, the differences in V85, V100, and V200 are acceptable in comparison to the large variation in dose in the treated volume, this work demonstrates that the development of a MC treatment planning system for LDR brachytherapy will improve the dose determination in the treated region and consequently the dose-outcome relationship, especially for the skin toxicity.
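For contrast with the full MC calculation, the TG-43 point-source (1D) formalism that the study benchmarks against reduces to a one-line dose-rate formula. The sketch below uses a placeholder radial dose function and dose rate constant, purely to show the water-based formalism that ignores tissue composition and interseed attenuation.

```python
import numpy as np

def tg43_dose_rate(r, S_k, Lambda, g, r0=1.0):
    """TG-43 point-source approximation at radial distance r (cm): water-based,
    no tissue composition, no interseed attenuation. The anisotropy factor is
    folded into Lambda here for brevity."""
    return S_k * Lambda * (r0 / r) ** 2 * g(r)

# Placeholder radial dose function g(r) for a 125I-class seed (illustrative).
g = np.poly1d([-0.0009, 0.019, -0.17, 1.15])
r = np.array([0.5, 1.0, 2.0, 5.0])
print(tg43_dose_rate(r, S_k=0.5, Lambda=0.965, g=g))   # cGy/h, placeholder inputs
```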
Morikami, Kenji; Itezono, Yoshiko; Nishimoto, Masahiro; Ohta, Masateru
2014-01-01
Compounds with a medium-sized flexible ring often show atropisomerism that is caused by the high-energy barriers between long-lived conformers that can be isolated and often have different biological properties to each other. In this study, the frequency of the transition between the two stable conformers, aS and aR, of thienotriazolodiazepine compounds with flexible 7-membered rings was estimated computationally by Monte Carlo (MC) simulations and validated experimentally by NMR experiments. To estimate the energy barriers for transitions as precisely as possible, the potential energy (PE) surfaces used in the MC simulations were calculated by molecular orbital (MO) methods. To accomplish the MC simulations with the MO-based PE surfaces in a practical central processing unit (CPU) time, the MO-based PE of each conformer was pre-calculated and stored before the MC simulations, and then only referred to during the MC simulations. The activation energies for transitions calculated by the MC simulations agreed well with the experimental ΔG determined by the NMR experiments. The analysis of the transition trajectories of the MC simulations revealed that the transition occurred not only through the transition states, but also through many different transition paths. Our computational methods gave us quantitative estimates of atropisomerism of the thienotriazolodiazepine compounds in a practical period of time, and the method could be applicable for other slow-dynamics phenomena that cannot be investigated by other atomistic simulations.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called variance reduction technique (VRT) in MC was implemented to speed up this duration. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history count. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with the 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving its accuracy.
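The range-rejection test itself is simple: if an electron's residual CSDA range is too short for it to reach the region of interest, its history is terminated and its energy deposited locally, but only below the global cut-off energy so the approximation stays bounded. The range table below is a rough stand-in for tabulated water data (e.g. ICRU-style tables).

```python
import numpy as np

# Assumed CSDA-range lookup for electrons in water (energy MeV -> range cm);
# a real implementation would interpolate tabulated reference data.
energies = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 12.0])
csda_range = np.array([0.18, 0.44, 0.98, 2.55, 4.98, 5.90])

def range_reject(e_kin, dist_to_roi, e_cut):
    """Electron range rejection: terminate the history (depositing its energy
    locally) if it cannot reach the region of interest, but never above the
    global cut-off energy, which bounds the error introduced by the VRT."""
    if e_kin >= e_cut:
        return False                       # 1, 2 or 5 MeV cut-offs in the study
    return np.interp(e_kin, energies, csda_range) < dist_to_roi

print(range_reject(2.0, dist_to_roi=3.0, e_cut=5.0))   # True: cannot reach ROI
```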
A novel integrated chassis controller for full drive-by-wire vehicles
NASA Astrophysics Data System (ADS)
Song, Pan; Tomizuka, Masayoshi; Zong, Changfu
2015-02-01
In this paper, a systematic design with multiple hierarchical layers is adopted in the integrated chassis controller for full drive-by-wire vehicles. A reference model and the optimal preview acceleration driver model are utilised in the driver control layer to describe and realise the driver's anticipation of the vehicle's handling characteristics, respectively. Both the sliding mode control and terminal sliding mode control techniques are employed in the vehicle motion control (MC) layer to determine the MC efforts such that better tracking performance can be attained. In the tyre force allocation layer, a polygonal simplification method is proposed to deal with the constraints of the tyre adhesive limits efficiently and effectively, whereby the load transfer due to both roll and pitch is also taken into account which directly affects the constraints. By calculating the motor torque and steering angle of each wheel in the executive layer, the total workload of four wheels is minimised during normal driving, whereas the MC efforts are maximised in extreme handling conditions. The proposed controller is validated through simulation to improve vehicle stability and handling performance in both open- and closed-loop manoeuvres.
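As a flavour of the MC-layer control law, the sketch below applies a basic sliding mode controller to a double-integrator tracking-error model: the control drives the sliding variable s = ce + ė to zero, with a saturation function replacing sign() to limit chattering. The gains, boundary layer, and plant are illustrative assumptions; the paper's terminal sliding mode and tyre-force allocation layers are not reproduced.

```python
import numpy as np

c, k, phi = 2.0, 5.0, 0.05        # surface slope, switching gain, boundary layer (assumed)
dt = 1e-3

def sat(x):
    return np.clip(x / phi, -1.0, 1.0)

# Double-integrator tracking-error dynamics: e_ddot = u + bounded disturbance.
e, e_dot, t = 1.0, 0.0, 0.0
for _ in range(10_000):
    s = c * e + e_dot                 # sliding variable
    u = -c * e_dot - k * sat(s)       # equivalent control + switching term
    dist = 0.5 * np.sin(10 * t)       # unmodelled bounded disturbance (|d| < k)
    e_dot += (u + dist) * dt
    e += e_dot * dt
    t += dt
print(f"final tracking error: {e:.4f}")
```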
NASA Technical Reports Server (NTRS)
Blanchard, M.; Bunch, T.; Davis, A.; Kyte, F.; Shade, H.; Erlichman, J.; Polkowski, G.
1977-01-01
During 1976 several penetrators (full and 0.58 scale) were dropped into a test site near McCook, Nebraska. The McCook site was selected because it simulated penetration into wind-deposited sediments (silts and sands) on Martian plains. The physical and chemical modifications found in the sediment after the penetrators' impact are described. Laboratory analyses have shown mineralogical and elemental changes are produced in the sediment next to the penetrator. Optical microscopy studies of material next to the skin of the penetrator revealed a layer of glassy material about 75 microns thick. Elemental analysis of a 0-1-mm layer of sediment next to the penetrator revealed increased concentrations for Cr, Fe, Ni, Mo, and reduced concentrations for Mg, Al, Si, P, K, and Ca. The Cr, Fe, Ni, and Mo were in fragments abraded from the penetrator. Mineralogical changes occurring in the sediment next to the penetrator included the introduction of micron-size grains of alpha iron and several hydrated iron oxide minerals. The newly formed silicate minerals include metastable phases of silica (cristobalite, lechatelierite, and opal). The glassy material was mostly opal, which formed when the host minerals (mica, calcite, and clay) decomposed. In summary, contaminants introduced by the penetrator occur up to 2 mm away from the penetrator's skin. Although volatile elements do migrate and new minerals are formed during the destruction of host minerals in the sediment, no changes were observed beyond the 2-mm distance. The analyses indicate 0.58-scale penetrators do effectively simulate full-scale testing for soil modification effects.
EFFECTS OF ACID RAIN ON APPLE TREE PRODUCTIVITY AND FRUIT QUALITY
Mature 'McIntosh', 'Empire', and 'Golden Delicious' apple trees (Malus domestica) were sprayed with simulated acid rain solutions in the pH range of 2.5 to 5.5 at full bloom in 1980 and 1981. In 1981, weekly sprays were applied at pH 2.75 and pH 3.25. Necrotic lesions developed o...
NASA Astrophysics Data System (ADS)
Schiavi, A.; Senzacqua, M.; Pioli, S.; Mairani, A.; Magro, G.; Molinelli, S.; Ciocca, M.; Battistoni, G.; Patera, V.
2017-09-01
Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region, while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions into a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details in the interaction of particles with human tissues, are considered to be the most reliable tool to address the complexity of mixed field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources. Therefore MC simulations are usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose programmable GPU cards prompted the development of trimmed-down MC-based dose engines which can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes on CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition with computational cost. Currently, the most refined module is the transport of proton beams in water: single pencil beam dose-depth distributions obtained with fred agree with those produced by standard MC codes within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving 10 million primaries per second on a single card. This performance allows one to recalculate a proton treatment plan at 1% of the total particles in just a few minutes.
Interfacing MCNPX and McStas for simulation of neutron transport
NASA Astrophysics Data System (ADS)
Klinkby, Esben; Lauritzen, Bent; Nonbøl, Erik; Kjær Willendrup, Peter; Filges, Uwe; Wohlmuther, Michael; Gallmeier, Franz X.
2013-02-01
Simulations of target-moderator-reflector system at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX (Waters et al., 2007 [1]) or FLUKA (Battistoni et al., 2007; Ferrari et al., 2005 [2,3]) whereas simulations of neutron transport from the moderator and the instrument response are performed by neutron ray tracing codes such as McStas (Lefmann and Nielsen, 1999; Willendrup et al., 2004, 2011a,b [4-7]). The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations, as it e.g. does not allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve such shortcomings includes the introduction of McStas inspired supermirrors in MCNPX. In the present paper different approaches to interface MCNPX and McStas are presented and applied to a simple test case. The direct coupling between MCNPX and McStas allows for more accurate simulations of e.g. complex moderator geometries, backgrounds, interference between beam-lines as well as shielding requirements along the neutron guides.
Bieda, Bogusław
2014-05-15
The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the CrystalBall® (CB) software, which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally developed for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected from MSP for 2005, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution was applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied to any steel plant. The results obtained from this study can help practitioners and decision-makers in steel production management. Copyright © 2013 Elsevier B.V. All rights reserved.
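A minimal sketch of this kind of spreadsheet-style MC uncertainty propagation, using numpy in place of CrystalBall; the inventory items and their means and standard deviations below are hypothetical placeholders, not MSP data.

```python
import numpy as np

# Each inventory item is drawn from a normal distribution and the trials
# are aggregated into summary statistics, mirroring the CB workflow.
rng = np.random.default_rng(1)
inventory = {"steel": (5.4e6, 2.7e5), "coke": (1.1e6, 5.5e4)}  # t/yr, hypothetical
trials = 10_000

samples = {k: rng.normal(mu, sd, trials) for k, (mu, sd) in inventory.items()}
total = sum(samples.values())
print(f"mean = {total.mean():.3e} t, "
      f"5th-95th percentile = {np.percentile(total, 5):.3e}-{np.percentile(total, 95):.3e} t")
```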
A method for photon beam Monte Carlo multileaf collimator particle transport
NASA Astrophysics Data System (ADS)
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time-consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuating material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC, and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within ±1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts the leaf-edge tongue-and-groove dose effect within ±1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within ±1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
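A minimal sketch of the simplified transport decision described above: accumulate the attenuating thickness a ray crosses in each simple geometric region, then decide between unattenuated passage, a single Compton scatter, and absorption from the total path length. The attenuation coefficients are illustrative placeholders, not tungsten data from the paper.

```python
import math, random

# First-scatter approximation through a segmented MLC: the survival and
# single-Compton probabilities are computed from the summed thickness.
MU_TOTAL = 0.5     # 1/cm, total attenuation coefficient (placeholder)
MU_COMPTON = 0.3   # 1/cm, Compton component of MU_TOTAL (placeholder)

def trace(region_thicknesses):
    t = sum(region_thicknesses)              # total material along the ray (cm)
    p_unattenuated = math.exp(-MU_TOTAL * t)
    p_compton = (MU_COMPTON / MU_TOTAL) * (1.0 - p_unattenuated)
    xi = random.random()
    if xi < p_unattenuated:
        return "primary"
    if xi < p_unattenuated + p_compton:
        return "compton"                     # a scatter angle would be sampled here
    return "absorbed"

counts = {"primary": 0, "compton": 0, "absorbed": 0}
for _ in range(100_000):
    counts[trace([0.2, 0.15, 0.3])] += 1     # thicknesses from three regions
print(counts)
```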
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popescu, I A; Lobo, J; Sawkey, D
2014-06-15
Purpose: To simulate and measure radiation backscattered into the monitor chamber of a TrueBeam linac; establish a rigorous framework for absolute dose calculations for TrueBeam Monte Carlo (MC) simulations through a novel approach, taking into account the backscattered radiation and the actual machine output during beam delivery; improve agreement between measured and simulated relative output factors. Methods: The 'monitor backscatter factor' is an essential ingredient of a well-established MC absolute dose formalism (the MC equivalent of the TG-51 protocol). This quantity was determined for the 6 MV, 6X FFF, and 10X FFF beams by two independent methods: (1) MC simulations in the monitor chamber of the TrueBeam linac; (2) linac-generated beam record data for target current, logged for each beam delivery. Upper head MC simulations used a freely available manufacturer-provided interface to a cloud-based platform, allowing use of the same head model as that used to generate the publicly available TrueBeam phase spaces, without revealing the upper head design. The MC absolute dose formalism was expanded to allow direct use of target current data. Results: The relation between backscatter, number of electrons incident on the target for one monitor unit, and MC absolute dose was analyzed for open fields, as well as a jaw-tracking VMAT plan. The agreement between the two methods was better than 0.15%. It was demonstrated that the agreement between measured and simulated relative output factors improves across all field sizes when backscatter is taken into account. Conclusion: For the first time, simulated monitor chamber dose and measured target current for an actual TrueBeam linac were incorporated in the MC absolute dose formalism. In conjunction with the use of MC inputs generated from post-delivery trajectory-log files, the present method allows accurate MC dose calculations without resorting to any of the simplifying assumptions previously made in the TrueBeam MC literature. This work has been partially funded by Varian Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsline, P.L.; Musselman, R.C.; Kender, W.J.
Mature McIntosh, Empire, and Golden Delicious apple trees (Malus domestica) were sprayed with simulated acid rain solutions in the pH range of 2.5 to 5.5 at full bloom in 1980 and 1981. In 1981, weekly sprays were applied at pH 2.75 and pH 3.25. Necrotic lesions developed on apple petals at pH 2.5, with slight injury appearing at pH 3.0 and 3.5. Apple foliage had no acid rain lesions at any of the pH levels tested. Pollen germination was reduced at pH 2.5 in Empire. Slight fruit set reduction at pH 2.5 was observed in McIntosh. Even at the lowest pH levels no detrimental effects of simulated acid rain were found on apple tree productivity and fruit quality when measured as fruit set, seed number per fruit, and fruit size and appearance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Gao, M
Purpose: Monte Carlo simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We studied the feasibility of utilizing cloud computing for the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot integral depth dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the StarCluster software developed at MIT, a Linux cluster with 2-100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud, where the MC simulation was run. Results: The simulated PBS plan has a field size of 10 × 10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63, and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform for running proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains
NASA Astrophysics Data System (ADS)
Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo
2016-09-01
The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction located in a plain site of Northeastern Italy. spMC provides a fairly complete collection of advanced methods for data inspection and implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on well-known prediction methods (such as indicator kriging and co-kriging) are also implemented in the spMC package, along with more advanced simulation approaches, e.g. path methods and Bayesian procedures that exploit the maximum entropy principle. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of this computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
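A minimal illustration of the basic quantity spMC estimates, the empirical one-step transition probabilities of a categorical lithology sequence; the real package works in R and handles 3-D spatial lags, whereas this Python sketch uses a made-up 1-D borehole log.

```python
import numpy as np

# Empirical transition probability matrix from a categorical sequence.
sequence = ["sand", "sand", "clay", "clay", "gravel", "sand", "clay"]
states = sorted(set(sequence))
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for a, b in zip(sequence, sequence[1:]):
    counts[idx[a], idx[b]] += 1            # count observed transitions a -> b
probs = counts / counts.sum(axis=1, keepdims=True)
print(states)
print(probs)                               # row i: P(next state | current = states[i])
```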
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J; Micka, J; Culberson, W
Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString 103Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of 103Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in 103Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. 103Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.
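For context, a sketch of the TG-43 2-D line-source formalism that the measured and simulated parameters above feed into; the dose-rate constant, g_L and F values used below are placeholders, not CivaString data.

```python
import math

# TG-43 2-D line-source dose rate:
#   Ddot(r, theta) = S_K * Lambda * [G_L(r,theta)/G_L(r0,theta0)] * g_L(r) * F(r,theta)
L = 1.0                        # active source length (cm)
R0, THETA0 = 1.0, math.pi / 2  # TG-43 reference point (1 cm, 90 degrees)

def G_L(r, theta):
    """Line-source geometry function: beta / (L * r * sin(theta))."""
    y = r * math.sin(theta)            # perpendicular distance to the source axis
    z = r * math.cos(theta)            # position along the source axis
    if y < 1e-9:                       # point lies on the source long axis
        return 1.0 / abs(z * z - L * L / 4.0)
    beta = math.atan((z + L / 2) / y) - math.atan((z - L / 2) / y)
    return beta / (L * y)

def dose_rate(S_K, Lam, r, theta, g_L_r, F_r_theta):
    return S_K * Lam * G_L(r, theta) / G_L(R0, THETA0) * g_L_r * F_r_theta

# Hypothetical inputs: 2 U source, Lambda = 0.6 cGy/(h*U), g_L = 0.5, F = 0.9
print(f"{dose_rate(2.0, 0.6, 2.0, math.pi / 3, 0.5, 0.9):.4f} cGy/h")
```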
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsline, P.L.; Musselman, R.C.; Kender, W.J.
Mature 'McIntosh', 'Empire', and 'Golden Delicious' apple trees (Malus domestica Borkh.) were sprayed with simulated acid rain solutions in the pH range of 2.5 to 5.5 at full bloom in 1980 and in 1981. In 1981, weekly sprays were applied at pH 2.75 and pH 3.25. Necrotic lesions developed on apple petals at pH 2.5, with slight injury appearing at pH 3.0 and pH 3.5. Apple foliage had no acid rain lesions at any of the pH levels tested. Pollen germination was reduced at pH 2.5 in 'Empire'. Slight fruit set reduction at pH 2.5 was observed in 'McIntosh'. The incidence of russetting on 'Golden Delicious' fruits was ameliorated by the presence of rain-exclusion chambers but was not affected by acid rain. With season-long sprays at pH 2.75, there was a slight delay in maturity and lower weight of 'McIntosh' apples. Even at the lowest pH levels no detrimental effects of simulated acid rain were found on apple tree productivity and fruit quality when measured as fruit set, seed number per fruit, and fruit size and appearance.
2012-07-01
...of the modelling and simulation community and provide it with implementation guidelines; and provide ... definition; relationship to standards; specification of a conceptual model (MC) management procedure; specification of MC artifacts. Important considerations ... using the present guideline as a reference. • VV&A (verification, validation and acceptance) of the MC must form an integral part of
NASA Astrophysics Data System (ADS)
Preston, L. A.
2017-12-01
Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Binding, Thermodynamics, and Selectivity of a Non-peptide Antagonist to the Melanocortin-4 Receptor
Saleh, Noureldin; Kleinau, Gunnar; Heyder, Nicolas; Clark, Timothy; Hildebrand, Peter W.; Scheerer, Patrick
2018-01-01
The melanocortin-4 receptor (MC4R) is a potential drug target for treatment of obesity, anxiety, depression, and sexual dysfunction. Crystal structures for MC4R are not yet available, which has hindered successful structure-based drug design. Using microsecond-scale molecular-dynamics simulations, we have investigated the selective binding of the non-peptide antagonist MCL0129 to a homology model of human MC4R (hMC4R). This approach revealed that, at the end of a multi-step binding process, MCL0129 spontaneously adopts a binding mode in which it blocks the agonist-binding site. This binding mode was confirmed in subsequent metadynamics simulations, which gave an affinity for hMC4R that matches the experimentally determined value. Extending our simulations of MCL0129 binding to hMC1R and hMC3R, we find that receptor subtype selectivity for hMC4R depends on a few amino acids located in various structural elements of the receptor. These insights may support rational drug design targeting the melanocortin systems.
Monte Carlo simulations of a low energy proton beamline for radiobiological experiments.
Dahle, Tordis J; Rykkelid, Anne Marit; Stokkevåg, Camilla H; Mairani, Andrea; Görgen, Andreas; Edin, Nina J; Rørvik, Eivind; Fjæra, Lars Fredrik; Malinen, Eirik; Ytre-Hauge, Kristian S
2017-06-01
In order to determine the relative biological effectiveness (RBE) of protons with high accuracy, radiobiological experiments with detailed knowledge of the linear energy transfer (LET) are needed. Cell survival data from high-LET protons are sparse, and experiments with low energy protons to achieve high LET values are therefore required. The aim of this study was to quantify LET distributions from a low energy proton beam by using Monte Carlo (MC) simulations, and to further compare to a proton beam representing a typical minimum energy available at clinical facilities. A Markus ionization chamber and Gafchromic films were employed in dose measurements in the proton beam at Oslo Cyclotron Laboratory. Dose profiles were also calculated using the FLUKA MC code, with the MC beam parameters optimized based on comparisons with the measurements. LET spectra and dose-averaged LET (LETd) were then estimated in FLUKA and compared with LET calculated from an 80 MeV proton beam. The initial proton energy was determined to be 15.5 MeV, with a Gaussian energy distribution of 0.2% full width at half maximum (FWHM) and a Gaussian lateral spread of 2 mm FWHM. The LETd increased with depth, from approximately 5 keV/μm at the entrance to approximately 40 keV/μm in the distal dose fall-off. The LETd values were considerably higher and the LET spectra much narrower than the corresponding spectra from the 80 MeV beam. MC simulations accurately modeled the dose distribution from the proton beam and could be used to estimate the LET at any position in the setup. The setup can be used to study the RBE for protons at high LETd, which is not achievable in clinical proton therapy facilities.
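The dose-averaged LET used above has a simple closed form, LETd = Σ d_i L_i / Σ d_i, where d_i is the dose deposited by events in LET bin L_i. A minimal sketch with a made-up spectrum:

```python
import numpy as np

# Dose-averaged LET from a scored LET spectrum (illustrative bin values).
let_bins = np.array([2.0, 5.0, 10.0, 20.0, 40.0])        # keV/um
dose_per_bin = np.array([0.10, 0.30, 0.35, 0.20, 0.05])  # relative dose per bin

let_d = np.sum(dose_per_bin * let_bins) / np.sum(dose_per_bin)
print(f"LETd = {let_d:.1f} keV/um")
```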
Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein
2016-01-01
The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC)-derived data, for an 18 MV Varian 2100C linac. High energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against the contaminant neutrons has been recommended by the new IAEA and NCRP protocols. The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and MC-based data were also derived. The two bunkers designed from the protocols and from the MC data are presented and discussed. Regarding door thickness, the doors designed by MC simulation and by the Wu-McGinley analytical method were close in both BPE and lead thickness. For the primary and secondary barriers, MC simulation resulted in 440.11 mm of ordinary concrete; in total, a concrete thickness of 1709 mm was required. Calculating the same parameters with the recommended analytical methods resulted in a required thickness of 1762 mm, using the recommended TVL of 445 mm for concrete. Additionally, a thickness of 752.05 mm was obtained for the secondary barrier. Our results showed that the MC simulation and the protocol recommendations are in good agreement for the contaminant radiation dose calculation. Differences between the analytical and MC simulation methods revealed that applying only one method to bunker design may lead to underestimation or overestimation in dose and shielding calculations.
Fast simulation of yttrium-90 bremsstrahlung photons with GATE.
Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan
2010-06-01
Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
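The core of the fast photon generator described above is sampling energies from a prerecorded PDF. A minimal sketch of the standard inverse-CDF table lookup; the energy grid and PDF shape below are illustrative placeholders, not the measured 90Y spectrum.

```python
import numpy as np

# Inverse-CDF sampling of photon energies from a tabulated PDF.
rng = np.random.default_rng(0)
energy = np.linspace(0.0, 2280.0, 229)    # keV grid up to the 90Y beta endpoint
pdf = np.exp(-energy / 300.0)             # toy, roughly bremsstrahlung-like shape
pdf /= pdf.sum()

cdf = np.cumsum(pdf)
cdf[-1] = 1.0                             # guard against floating-point round-off

def sample_photons(n):
    """Draw n photon energies by table lookup on the cumulative PDF."""
    return energy[np.searchsorted(cdf, rng.random(n))]

print(sample_photons(5))
```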
Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework
Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.
2015-01-01
Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765
Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework.
Dunkerley, David A P; Tomkowiak, Michael T; Slagowski, Jordan M; McCabe, Bradley P; Funk, Tobias; Speidel, Michael A
2015-02-21
Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8-6.4% (18.6-31.5 cm acrylic, 100 kV), versus 2.1-4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
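A minimal sketch of the surrogate-then-sample idea: fit a small polynomial basis to a handful of expensive simulator runs, then run cheap MC on the closed-form surrogate. The "simulator" is a stand-in function, and a fixed low-order basis replaces the paper's MIP-selected subset.

```python
import numpy as np

# Fit a quadratic polynomial surrogate by least squares, then MC-sample it.
rng = np.random.default_rng(3)
simulator = lambda x: np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2   # placeholder model

X = rng.uniform(-1, 1, size=(50, 2))                  # 50 "expensive" runs
basis = lambda x: np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                   x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(basis(X), simulator(X), rcond=None)

Xmc = rng.uniform(-1, 1, size=(100_000, 2))           # cheap MC on the surrogate
y = basis(Xmc) @ coef
print(f"surrogate MC: mean = {y.mean():.4f}, std = {y.std():.4f}")
```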
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
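The acceptance step at the heart of neMD/MC is a Metropolis criterion on the nonequilibrium work W of the switch, accepting with probability min(1, exp(-βW)), which preserves the equilibrium distribution. A minimal sketch, with W drawn from a toy Gaussian work distribution just to exercise the bookkeeping:

```python
import math, random

# Metropolis acceptance of a nonequilibrium switching move costing work W.
BETA = 1.0 / 0.593   # 1/kT in (kcal/mol)^-1 near 298 K

def accept(work):
    return random.random() < min(1.0, math.exp(-BETA * work))

works = [random.gauss(1.0, 1.0) for _ in range(10_000)]   # toy work samples
rate = sum(accept(w) for w in works) / len(works)
print(f"acceptance rate: {rate:.2f}")
```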
Integration of OpenMC methods into MAMMOTH and Serpent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie; DeHart, Mark; Tumulak, Aaron
OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakabe, D; Ohno, T; Araki, F
Purpose: The purpose of this study was to evaluate the combined organ dose of digital subtraction angiography (DSA) and computed tomography (CT) using a Monte Carlo (MC) simulation of abdominal interventions. Methods: The organ doses for DSA and CT were obtained with MC simulation and actual measurements using fluorescent-glass dosimeters at 7 abdominal locations in an Alderson-Rando phantom. DSA was performed from three directions: posterior-anterior (PA), right anterior oblique (RAO), and left anterior oblique (LAO). The organ dose from MC simulation was compared with actual radiation dose measurements. Calculations for the MC simulation were carried out with the GMctdospp (IMPS, Germany) software based on the EGSnrc MC code. Finally, the combined organ dose for DSA and CT was calculated from the MC simulation using the X-ray conditions of a patient with a diagnosis of hepatocellular carcinoma. Results: For DSA from the PA direction, the organ doses for the actual measurements and the MC simulation were 2.2 and 2.4 mGy/100 mAs at the liver, respectively, and 3.0 and 3.1 mGy/100 mAs at the spinal cord, while for CT, the organ doses were 15.2 and 15.1 mGy/100 mAs at the liver, and 14.6 and 13.5 mGy/100 mAs at the spinal cord. The maximum difference in organ dose between the actual measurements and the MC simulation was 11.0% for the spleen at PA, 8.2% for the spinal cord at RAO, and 6.1% for the left kidney at LAO with DSA, and 9.3% for the stomach with CT. The combined organ dose (4 DSAs and 6 CT scans) under actual patient conditions was found to be 197.4 mGy for the liver and 205.1 mGy for the spinal cord. Conclusion: Our method makes it possible to accurately assess the organ dose to patients for abdominal interventions with combined DSA and CT.
NASA Astrophysics Data System (ADS)
Baptista, M.; Di Maria, S.; Vieira, S.; Vaz, P.
2017-11-01
Cone-Beam Computed Tomography (CBCT) enables high-resolution volumetric scanning of the bone and soft tissue anatomy under investigation at the treatment accelerator. This technique is extensively used in Image Guided Radiation Therapy (IGRT) for pre-treatment verification of patient position and target volume localization. When employed daily and several times per patient, CBCT imaging may lead to high cumulative imaging doses to the healthy tissues surrounding the exposed organs. This work aims at (1) evaluating the dose distribution during a CBCT scan and (2) calculating the organ doses involved in this image guiding procedure for clinically available scanning protocols. Both Monte Carlo (MC) simulations and measurements were performed. To model and simulate the kV imaging system mounted on a linear accelerator (Edge™, Varian Medical Systems), the state-of-the-art MC radiation transport program MCNPX 2.7.0 was used. To validate the simulation results, measurements of the Computed Tomography Dose Index (CTDI) were performed using standard PMMA head and body phantoms, 150 mm in length, and a standard pencil ionizing chamber (IC) 100 mm long. Measurements for head and pelvis scanning protocols usually adopted in the clinical environment were acquired, using two acquisition modes (full-fan and half-fan). To calculate the organ doses, the implemented MC model of the CBCT scanner was used together with a male voxel phantom ("Golem"). The agreement between the MCNPX simulations and the CTDIw measurements (differences up to 17%) indicates that the CBCT MC model was successfully validated, taking the various uncertainties into account. The adequacy of the computational model for mapping dose distributions during a CBCT scan is discussed in order to identify ways to reduce the total CBCT imaging dose. The organ dose assessment highlights the need to evaluate the therapeutic and CBCT imaging doses in a more balanced approach, and the importance of improving awareness of the increased risk arising from repeated exposures.
Predictive process simulation of cryogenic implants for leading edge transistor design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh
2012-11-06
Two cryogenic implant TCAD modules have been developed: (i) a continuum-based compact model targeted towards a TCAD production environment, calibrated against an extensive data set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data set. (ii) A kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature-versus-time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time structure of the beam for the amorphization process: assuming an average dose rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.
NASA Astrophysics Data System (ADS)
Zhou, Abel; White, Graeme L.; Davidson, Rob
2018-02-01
Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance and validation can be simulated through the use of Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids from the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating grid designs.
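The grid figures of merit above are connected by simple algebra: with incident primary P and scatter S (SPR = S/P), a grid with primary transmission Tp and scatter transmission Ts passes a total fraction Tt of the radiation and changes the SPR behind the grid to SPR' = SPR × Ts / Tp. A minimal sketch with illustrative numbers:

```python
# Relating Tp, Ts, Tt and SPR for an anti-scatter grid (toy values).
SPR, Tp, Ts = 1.5, 0.70, 0.10

Tt = (Tp + Ts * SPR) / (1.0 + SPR)     # total transmission of the grid
spr_behind = SPR * Ts / Tp             # residual scatter-to-primary ratio
print(f"Tt = {Tt:.3f}, SPR behind grid = {spr_behind:.3f}")
```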
Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein
2016-01-01
Aim: The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC)-derived data, for an 18 MV Varian 2100C linac. Background: High energy radiation therapy is associated with fast and thermal photoneutrons. Adequate shielding against the contaminant neutrons has been recommended by the new IAEA and NCRP protocols. Materials and methods: The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations. MC-based data were also derived. The two bunkers designed from the protocols and from the MC data are presented and discussed. Results: Regarding door thickness, the doors designed by MC simulation and by the Wu–McGinley analytical method were close in both BPE and lead thickness. For the primary and secondary barriers, MC simulation resulted in 440.11 mm of ordinary concrete; in total, a concrete thickness of 1709 mm was required. Calculating the same parameters with the recommended analytical methods resulted in a required thickness of 1762 mm, using the recommended TVL of 445 mm for concrete. Additionally, a thickness of 752.05 mm was obtained for the secondary barrier. Conclusion: Our results showed that the MC simulation and the protocol recommendations are in good agreement for the contaminant radiation dose calculation. Differences between the analytical and MC simulation methods revealed that applying only one method to bunker design may lead to underestimation or overestimation in dose and shielding calculations. PMID:26900357
Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.
Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P
2018-01-04
Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations, so the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction; e.g., the activity recovery was 88% for the largest sphere, compared with 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
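For reference, the multiplicative update that each OSEM subset applies is the MLEM step x ← x / (Aᵀ1) · Aᵀ(y / (Ax)), where A is the system matrix (here supplied by the MC projector). A minimal sketch on a tiny random system, not the GPU code described above:

```python
import numpy as np

# MLEM iterations on a toy system: A maps 40 voxels to 60 detector bins.
rng = np.random.default_rng(7)
A = rng.uniform(0.0, 1.0, size=(60, 40))        # stand-in system matrix
x_true = rng.uniform(0.5, 2.0, 40)
y = rng.poisson(A @ x_true).astype(float)       # Poisson-noisy projections

x = np.ones(40)                                 # uniform initial estimate
sens = A.T @ np.ones(60)                        # sensitivity image A^T 1
for _ in range(50):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```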
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
Towards real-time photon Monte Carlo dose calculation in the cloud.
Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-07
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
John Glenn during preflight training for STS-95
1998-04-14
S98-06949 (28 April 1998) --- U.S. Sen. John H. Glenn Jr. (D.-Ohio), talks with crew trainer Sharon Jones prior to simulating procedures for egressing from a troubled space shuttle. This training mockup is called the full fuselage trainer (FFT). Glenn has been named as a payload specialist for STS-95, scheduled for launch later this year. Photo Credit: Joe McNally, National Geographic, for NASA
Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross
2016-06-01
To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations, as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time-activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time-activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moskvin, V; Pirlepesov, F; Tsiamas, P
Purpose: This study provides an overview of the design and commissioning of the Monte Carlo (MC) model of a spot-scanning proton therapy nozzle and its implementation for patient plan simulation. Methods: The Hitachi PROBEAT V scanning nozzle was simulated based on vendor specifications using the TOPAS extension of the Geant4 code. FLUKA MC simulation was also utilized to provide supporting data for the main simulation. Validation of the MC model was performed using vendor-provided data and measurements collected during acceptance/commissioning of the proton therapy machine. Actual patient plans using CT-based treatment geometry were simulated and compared to the dose distributions produced by the treatment planning system (Varian Eclipse 13.6) and to patient quality assurance measurements. In-house MATLAB scripts were used for converting DICOM data into TOPAS input files. Results: Comparison of integrated depth doses (IDDs), therapeutic ranges (R90), and spot shapes/sizes at different distances from the isocenter indicates good agreement between MC and measurements. R90 agreement is within 0.15 mm across all energy tunes. IDD and spot shape/size differences are within the statistical error of the simulation (less than 1.5%). The MC-simulated data, validated with physical measurements, were used for the commissioning of the treatment planning system. Patient geometry simulations were conducted based on the Eclipse-produced DICOM plans. Conclusion: The treatment nozzle and standard option beam model were implemented in the TOPAS framework to simulate a highly conformal discrete spot-scanning proton beam system.
Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko
2012-01-01
This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external), diffusion (liquid and vapour) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of the reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as a function of relative humidity and temperature, whereas diffusion of liquid water was obtained empirically from experiment by adopting Fick's law. Percolation was estimated following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with a calculation time step of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only the top layer (less than 20 cm) showed a considerable MC reduction; the deeper layers remained comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulations of composting.
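A minimal sketch of the liquid-diffusion (Fick's law) part of such a vertical MC model as an explicit finite-difference update; the evaporation and Darcy percolation terms would be added to the same update. The diffusivity, grid, and boundary values are illustrative, not the paper's parameters.

```python
import numpy as np

# Explicit 1-D finite-difference diffusion of moisture content down a pile.
D = 1e-8                                 # m^2/s effective diffusivity (placeholder)
dz, dt = 0.02, 60.0                      # 2 cm layers, 1 min time steps
mc = np.full(50, 0.60)                   # initial MC = 60% through a 1 m column
mc[0] = 0.45                             # drier top boundary from evaporation

for _ in range(int(4 * 7 * 24 * 3600 / dt)):          # four weeks of composting
    mc[1:-1] += D * dt / dz**2 * (mc[2:] - 2 * mc[1:-1] + mc[:-2])
print("MC profile (top to bottom):", np.round(mc[::10], 3))
```

The stability number D·dt/dz² = 1.5e-3 is well below the explicit-scheme limit of 0.5, so the update above remains stable over the four-week run.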
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
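A minimal sketch of the scaling step described above: a zero-absorption time-resolved reflectance curve R0(t) is reweighted by a Beer-Lambert factor built from the fraction of time f_i the average photon path spends in layer i, R(t) = R0(t) · exp(-(Σ_i f_i μ_a,i) v t), with v the speed of light in tissue. The curve, layer fractions, and absorption values below are toy numbers.

```python
import numpy as np

# Weighted Beer-Lambert scaling of a zero-absorption reflectance curve.
t = np.linspace(0.0, 2.0e-9, 200)          # time axis (s)
R0 = t * np.exp(-t / 3e-10)                # placeholder zero-absorption curve
f = np.array([0.6, 0.4])                   # fraction of path time per layer
mu_a = np.array([0.1, 0.3])                # absorption coefficient per layer (1/cm)
v = 3e10 / 1.4                             # speed of light in tissue (cm/s), n = 1.4

R = R0 * np.exp(-(f @ mu_a) * v * t)       # scaled, absorbing-tissue curve
print("peak shifted from %.2e s to %.2e s" % (t[R0.argmax()], t[R.argmax()]))
```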
Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.
Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A
2011-01-01
Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
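One practical stopping criterion of the kind discussed above: keep adding MC runs until the half-width of the 95% confidence interval of the output mean falls below a relative tolerance. A minimal sketch, with a placeholder distribution standing in for a long wastewater treatment simulation (the paper's actual method combines several criteria):

```python
import numpy as np

# Adaptive MC stopping rule based on the confidence interval of the mean.
rng = np.random.default_rng(11)
model = lambda: rng.lognormal(0.0, 0.5)      # stand-in for one full simulation

results, tol = [], 0.01                      # 1% relative tolerance
while True:
    results.append(model())
    n = len(results)
    if n >= 30 and n % 100 == 0:             # check periodically after a warm-up
        half_width = 1.96 * np.std(results, ddof=1) / np.sqrt(n)
        if half_width < tol * np.mean(results):
            break
print(f"converged after {n} simulations")
```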
MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pater, P; Vallieres, M; Seuntjens, J
2014-06-15
Purpose: To present a hands-on project on Monte Carlo (MC) methods recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of an MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of teaching radiation physics through MC, and surveyed possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class and 65% estimate that they have a good to very good understanding of MC methods after attending the class. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours was added to the graduate study curriculum in 2012. MC methods produce "gold standard" dose distributions and are slowly entering routine clinical work, and a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from the governments of Canada and Quebec. PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
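The sampling steps the project asks students to code have a compact form: draw the free path from the exponential attenuation law, choose the interaction type from the relative cross sections, and deposit energy locally under a kerma approximation. A miniature 1-D sketch (no angular deflection; the coefficients are illustrative values, not course data):

```python
import math, random

# Minimal MC photon transport through a water slab with two interaction types.
MU_PE, MU_CS = 0.05, 0.18       # 1/cm photoelectric and Compton (placeholders)
MU = MU_PE + MU_CS

def track_photon(E=50.0, depth=10.0):
    z, deposited = 0.0, 0.0
    while True:
        z += -math.log(1.0 - random.random()) / MU   # sample interaction length
        if z > depth:
            return deposited                         # photon escapes the slab
        if random.random() < MU_PE / MU:             # sample interaction type
            return deposited + E                     # photoelectric: absorb all
        E_dep = E * random.uniform(0.0, 0.5)         # crude Compton energy transfer
        deposited += E_dep                           # kerma-style local deposit
        E -= E_dep

mean_dep = sum(track_photon() for _ in range(20_000)) / 20_000
print(f"mean energy deposited per photon: {mean_dep:.1f} keV")
```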
Chetty, Indrin J; Curran, Bruce; Cygler, Joanna E; DeMarco, John J; Ezzell, Gary; Faddegon, Bruce A; Kawrakow, Iwan; Keall, Paul J; Liu, Helen; Ma, C M Charlie; Rogers, D W O; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V
2007-12-01
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
McStas 1.1: a tool for building neutron Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.
2000-03-01
McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.
Sung, Wonmo; Park, Jong In; Kim, Jung-in; Carlson, Joel; Ye, Sung-Joon
2017-01-01
This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated, and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, plan quality worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices, and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12, and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09, and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans. PMID:28493940
NASA Technical Reports Server (NTRS)
Khambatta, Cyrus F.
2007-01-01
A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.
FF12MC: A revised AMBER forcefield and new protein simulation protocol
2016-01-01
Specialized to simulate proteins in molecular dynamics (MD) simulations with explicit solvation, FF12MC is a combination of a new protein simulation protocol employing atomic masses uniformly reduced by tenfold and a revised AMBER forcefield FF99 with (i) shortened C—H bonds, (ii) removal of torsions involving a nonperipheral sp3 atom, and (iii) reduced 1–4 interaction scaling factors of torsions ϕ and ψ. This article reports that in multiple, distinct, independent, unrestricted, unbiased, isobaric–isothermal, and classical MD simulations FF12MC can (i) simulate the experimentally observed flipping between left‐ and right‐handed configurations for C14–C38 of BPTI in solution, (ii) autonomously fold chignolin, CLN025, and Trp‐cage with folding times that agree with the experimental values, (iii) simulate subsequent unfolding and refolding of these miniproteins, and (iv) achieve a robust Z score of 1.33 for refining protein models TMR01, TMR04, and TMR07. By comparison, the latest general‐purpose AMBER forcefield FF14SB locks the C14–C38 bond to the right‐handed configuration in solution under the same protein simulation conditions. Statistical survival analysis shows that FF12MC folds chignolin and CLN025 in isobaric–isothermal MD simulations 2–4 times faster than FF14SB under the same protein simulation conditions. These results suggest that FF12MC may be used for protein simulations to study kinetics and thermodynamics of miniprotein folding as well as protein structure and dynamics. Proteins 2016; 84:1490–1516. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27348292
Zhao, Chao; Li, Dawei; Feng, Chuanping; Zhang, Zhenya; Sugiura, Norio; Yang, Yingnan
2015-01-01
A series of advanced WO3-based photocatalysts, including CuO/WO3, Pd/WO3, and Pt/WO3, were synthesized for the photocatalytic removal of microcystin-LR (MC-LR) under simulated solar light. In the present study, Pt/WO3 exhibited the best performance for the photocatalytic degradation of MC-LR. The MC-LR degradation can be described by a pseudo-first-order kinetic model. Chloride ion (Cl−) at an appropriate concentration could enhance the MC-LR degradation. The presence of metal cations (Cu2+ and Fe3+) improved the photocatalytic degradation of MC-LR. This study suggests that Pt/WO3 photocatalytic oxidation under solar light is a promising option for the purification of water containing MC-LR. PMID:25884038
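For reference, a pseudo-first-order fit reduces to a linear regression of ln(C0/C) against time; a minimal sketch with hypothetical concentration data (not the study's measurements):

```python
import numpy as np

# Hypothetical degradation data (time in min, normalized MC-LR concentration).
t = np.array([0, 30, 60, 90, 120], dtype=float)
c = np.array([1.00, 0.62, 0.40, 0.25, 0.16])

# Pseudo-first-order model: C(t) = C0 * exp(-k t)  =>  ln(C0/C) = k t
k = np.polyfit(t, np.log(c[0] / c), 1)[0]   # slope of ln(C0/C) vs t
half_life = np.log(2) / k
print(f"k = {k:.4f} min^-1, t1/2 = {half_life:.1f} min")
```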
A novel MC4R deletion coexisting with FTO and MC1R gene variants, causes severe early onset obesity.
Neocleous, Vassos; Shammas, Christos; Phelan, Marie M; Fanis, Pavlos; Pantelidou, Maria; Skordis, Nicos; Mantzoros, Christos; Phylactou, Leonidas A; Toumba, Meropi
2016-07-01
Heterozygous mutations of the melanocortin-4-receptor gene (MC4R) are the most frequent cause of monogenic obesity. We describe a novel MC4R deletion in a girl with severe early onset obesity, tall stature, pale skin, and red hair. Clinical and hormonal parameters were evaluated in a girl born at full term to non-consanguineous parents. Her body mass index (BMI) at presentation (3 years) was 30 kg/m² (z-score: +4.5 SDS). By the age of 5.2 years, she exhibited extreme linear growth acceleration and developed hyperinsulinemia. Direct sequencing of MC4R and MC1R, and genotyping of the known FTO single nucleotide polymorphism (SNP) rs9939609, were performed for the patient and her family. A novel heterozygous MC4R p.Met215del (c.643_645delATG) deletion was identified in the patient, her father, and her brother, both of whom exhibited a milder phenotype. 3D structural dynamic simulation studies investigated the conformational changes induced by the p.Met215del. The patient and her mother were also found to be carriers of the obesity-risk-associated FTO rs9939609 SNP. Finally, the identification of the known p.Arg160Trp MC1R variant in the patient accounts for the red hair and pale skin phenotypic features. The p.Met215del causes global conformational and functional changes, as it is localized at the alpha-helical transmembrane regions and the membrane-spanning regions of the beta-barrel. This novel mutation produces a severe overgrowth phenotype that is apparent from infancy and is progressive in childhood. The additional negative effect of environmental factors and unhealthy lifestyle habits, as well as a possible co-interaction of the FTO rs9939609 SNP, may worsen the phenotype.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu
2015-05-15
Purpose: To investigate why dose-rate constants for ¹²⁵I and ¹⁰³Pd seeds computed using the spectroscopic technique, Λ_spec, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique's use of approximations of the true fluence distribution leaving the source, φ_full. In particular, the fluence distribution used in the spectroscopic technique, φ_spec, approximates the spatial, angular, and energy distributions of φ_full. This work quantified the extent to which each of these approximations affects the accuracy of Λ_spec. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ_spec. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ_full, were computed with MC simulations using the full source geometry for each of 14 different ¹²⁵I and 6 different ¹⁰³Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ_spec. Λ_spec was compared to Λ_full to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ_full were extracted from the phase spaces and were qualitatively compared to those used by φ_spec. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ_spec. The dose-rate constant resulting from using approximated distribution i, Λ_approx,i, was computed using the modified phase space and compared to Λ_full. For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ_spec. Results: For all sources studied, the angular and spatial distributions of φ_full were more complex than the distributions used in φ_spec. Differences between Λ_spec and Λ_full ranged from −0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ_spec, which caused differences in Λ of up to +5.3% relative to Λ_full. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of −0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of −0.4%. Conclusions: The approximations used in φ_spec caused discrepancies between Λ_approx,i and Λ_full of up to 7.8%. With the exception of the energy distribution, the approximations used in φ_spec contributed to this discrepancy for all source models studied. To improve the accuracy of Λ_spec, the spatial and angular distributions of φ_full could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ_full binned at different spatial and angular resolutions.
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1998-01-01
A prognostic cloud scheme named McRAS (Microphysics of clouds with Relaxed Arakawa-Schubert Scheme) was developed with the aim of improving cloud microphysics and cloud-radiation interactions in GCMs. McRAS distinguishes convective, stratiform, and boundary-layer clouds. The convective clouds merge into stratiform clouds on an hourly time scale, while the boundary-layer clouds do so instantly. The cloud condensate transforms into precipitation following the auto-conversion relations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice-crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically with fully active cloud microphysics throughout their life cycle, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single column model (SCM) with the GATE Phase III data has shown that McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. An evaluation with the ARM-CART SCM data in a cloud model intercomparison exercise shows a reasonable but not outstandingly accurate simulation. Such a discrepancy is common to almost all models and is related, in part, to the input data quality. McRAS was implemented in the GEOS II GCM. A 50-month integration, initialized with the ECMWF analysis of observations for January 1, 1987 and forced with the observed sea-surface temperatures, sea-ice distribution, and vegetation properties (biomes and soils), with prognostic soil moisture, snow cover, and hydrology, showed a very realistic simulation of cloud processes, in-cloud water and ice, and cloud-radiative forcing (CRF). The simulated ITCZ showed a realistic time-mean structure and seasonal cycle, while the simulated CRF showed sensitivity to the vertical distribution of cloud water, which can be easily altered by the choice of the time constant and in-cloud critical cloud water amount regulators for auto-conversion. The CRF and its feedbacks also have a profound effect on the ITCZ. Even though somewhat weaker than observed, the McRAS-GCM simulation produces robust 30-60 day oscillations in the 200 hPa velocity potential. Two ensembles of 4-summer (July, August, September) simulations, one each for 1987 and 1988, show that the McRAS-GCM simulates realistic and statistically significant precipitation differences over India, Central America, and tropical Africa. Several seasonal simulations were performed with the McRAS-GEOS II GCM for the summer (June-July-August) and winter (December-January-February) periods to determine how the simulated clouds and CRFs would be affected by: (i) advection of clouds; (ii) cloud-top entrainment instability; (iii) cloud water inhomogeneity correction; and (iv) cloud production and dissipation in different cloud processes. The results show that each of these processes contributes to the simulated cloud fraction and CRF.
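For orientation, Sundqvist-type auto-conversion of cloud condensate to precipitation is commonly written in the generic textbook form below; McRAS's exact expression, with its Bergeron-Findeisen and collection adaptations, differs in detail:

$$P = c_0\, q_c \left[1 - \exp\!\left(-\left(\frac{q_c}{q_c^{\mathrm{crit}}}\right)^{2}\right)\right]$$

where \(q_c\) is the in-cloud condensate, \(c_0\) an inverse time-constant regulator, and \(q_c^{\mathrm{crit}}\) the critical in-cloud condensate amount; these correspond to the "time constant and in-cloud critical cloud water amount regulators" referred to above.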
Abouelnasr, Mahmoud K F; Smit, Berend
2012-09-07
The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.
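A minimal kinetic Monte Carlo loop for cage-to-cage hopping, to make the kMC ingredient concrete (the ring geometry, rate law, and parameters are illustrative assumptions; the paper's mixing rule makes the re-crossing coefficient depend on the loadings of both cages):

```python
import numpy as np
rng = np.random.default_rng(0)

def kmc_hop_chain(n_cages=20, n_particles=10, k0=1.0, steps=10_000):
    """Toy kinetic Monte Carlo of cage-to-cage hopping on a 1D ring.
    Hop rate out of a cage is taken as k0 * occupancy (illustrative)."""
    occ = np.zeros(n_cages, int)
    np.add.at(occ, rng.integers(0, n_cages, n_particles), 1)  # random fill
    t = 0.0
    for _ in range(steps):
        rates = k0 * occ.astype(float)           # hop-out rate of each cage
        ktot = rates.sum()
        t += -np.log(rng.random()) / ktot        # Gillespie time increment
        i = rng.choice(n_cages, p=rates / ktot)  # pick source cage
        j = (i + rng.choice([-1, 1])) % n_cages  # hop to a neighbour
        occ[i] -= 1
        occ[j] += 1
    return t, occ

elapsed, occupancy = kmc_hop_chain()
```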
Development of a polarized neutron beam line at Algerian research reactors using McStas software
NASA Astrophysics Data System (ADS)
Makhloufi, M.; Salah, H.
2017-02-01
Unpolarized instrumentation has long been studied and designed using the McStas simulation tool, but only recently have new models been developed for McStas to simulate polarized neutron scattering instruments. In the present contribution, we used McStas to design a polarized neutron beam line, taking advantage of the spectrometers available in Algeria: a reflectometer and a diffractometer. Both thermal and cold neutrons were considered. Polarization was achieved with two types of supermirror polarizers, FeSi and CoCu, provided by the HZB institute. For the sake of performance and comparison, the polarizers were characterized and their characteristics reproduced. The simulated instruments are reported. A flipper and electromagnets for the guide field were developed. Further developments, including analyzers and upgrading of the existing spectrometers, are underway.
Comparison of Fluka-2006 Monte Carlo Simulation and Flight Data for the ATIC Detector
NASA Technical Reports Server (NTRS)
Gunasingha, R.M.; Fazely, A.R.; Adams, J.H.; Ahn, H.S.; Bashindzhagyan, G.L.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T.G.; Isbert, J.;
2007-01-01
We have performed a detailed Monte Carlo (MC) simulation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2006, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon-flight investigation of the cosmic-ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active bismuth germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification and, for particle tracking, three projective layers of x-y scintillator hodoscopes, located above, in the middle of, and below a 0.75 nuclear-interaction-length graphite target. Our simulations are part of an analysis package of both nuclear (A) and energy dependences for different nuclei interacting in the ATIC detector. The MC simulates the response of different components of the detector, such as the Si-matrix, the scintillator hodoscopes, and the BGO calorimeter, to various nuclei. We present comparisons of the FLUKA-2006 MC calculations with GEANT calculations and with the ATIC CERN data and ATIC flight data.
New developments in the McStas neutron instrument simulation package
NASA Astrophysics Data System (ADS)
Willendrup, P. K.; Knudsen, E. B.; Klinkby, E.; Nielsen, T.; Farhi, E.; Filges, U.; Lefmann, K.
2014-07-01
The McStas neutron ray-tracing software package is a versatile tool for building accurate simulators of neutron scattering instruments at reactors, short- and long-pulsed spallation sources such as the European Spallation Source. McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedical optics, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue, but most previous studies did not consider a radial-beam LED as the light source. We therefore characterized a radial-beam LED and applied those characteristics to the light source in MC simulation. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons; the second is the incident angle of a photon within that launch area; the third is the refraction effect, which depends on the contact area between the LED and a turbid medium. To verify the MC simulation, we compared simulation and experimental results; the average correlation coefficient between them is 0.9954. Through this study, we show an effective method of simulating light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.
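The three source characteristics can be sampled in a few lines; the sketch below is a toy model (the disk radius, Lambertian angular profile, and refractive indices are assumptions, not the paper's measured source):

```python
import numpy as np
rng = np.random.default_rng(0)

N_LED, N_TISSUE = 1.5, 1.4   # illustrative refractive indices (assumed)

def launch_photon(r_led=0.25):
    """Sample one photon launch for a radial-beam LED in contact with
    tissue: uniform position on the emitting disk, Lambertian emission
    angle, then refraction into the medium via Snell's law."""
    # (1) launch position: uniform over a disk of radius r_led (cm)
    r, phi = r_led * np.sqrt(rng.random()), 2 * np.pi * rng.random()
    # (2) incident polar angle: Lambertian source => cos(theta) = sqrt(u)
    cos_i = np.sqrt(rng.random())
    # (3) refraction at the LED/tissue interface (Snell's law)
    sin_t = N_LED / N_TISSUE * np.sqrt(1.0 - cos_i**2)
    if sin_t >= 1.0:                  # total internal reflection: resample
        return launch_photon(r_led)
    cos_t, psi = np.sqrt(1.0 - sin_t**2), 2 * np.pi * rng.random()
    pos = (r * np.cos(phi), r * np.sin(phi), 0.0)
    direc = (sin_t * np.cos(psi), sin_t * np.sin(psi), cos_t)
    return pos, direc
```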
Destruction of a Magnetized Star
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-01-01
What happens when a magnetized star is torn apart by the tidal forces of a supermassive black hole, in a violent process known as a tidal disruption event? Two scientists have broken new ground by simulating the disruption of stars with magnetic fields for the first time. [Figure: the magnetic field configuration during a simulation of the partial disruption of a star. Top left: pre-disruption star. Bottom left: matter begins to re-accrete onto the surviving core after the partial disruption. Right: vortices form in the core as high-angular-momentum debris continues to accrete, winding up and amplifying the field. Adapted from Guillochon & McCourt 2017.] What About Magnetic Fields? Magnetic fields are expected to exist in the majority of stars. Though these fields don't dominate the energy budget of a star (the magnetic pressure is a million times weaker than the gas pressure in the Sun's interior, for example), they are the drivers of interesting activity, like the prominences and flares of our Sun. Given this, we can wonder what role stars' magnetic fields might play when the stars are torn apart in tidal disruption events. Do the fields change what we observe? Are they dispersed during the disruption, or can they be amplified? Might they even be responsible for launching jets of matter from the black hole after the disruption? Star vs. Black Hole: In a recent study, James Guillochon (Harvard-Smithsonian Center for Astrophysics) and Michael McCourt (Hubble Fellow at UC Santa Barbara) have tackled these questions by performing the first simulations of tidal disruptions of stars that include magnetic fields. In their simulations, Guillochon and McCourt evolve a solar-mass star that passes close to a million-solar-mass black hole. Their simulations explore different magnetic field configurations for the star, and they consider both what happens when the star barely grazes the black hole and is only partially disrupted, as well as what happens when the black hole tears the star apart completely. Amplifying Encounters: For stars that survive their encounter with the black hole, Guillochon and McCourt find that the process of partial disruption and re-accretion can amplify the magnetic field of the star by up to a factor of 20. Repeated encounters of the star with the black hole could amplify the field even more. The authors suggest an interesting implication of this idea: a population of highly magnetized stars may have formed in our own galactic center, resulting from their encounters with the supermassive black hole Sgr A*. [Figure: a turbulent magnetic field forms after a partial stellar disruption and re-accretion of the tidal tails. Adapted from Guillochon & McCourt 2017.] Effects in Destruction: For stars that are completely shredded and form a tidal stream after their encounter with the black hole, the authors find that the magnetic field geometry straightens within the stream of debris. There, the pressure of the magnetic field eventually dominates over the gas pressure and self-gravity. Guillochon and McCourt find that the field's new configuration isn't ideal for powering jets from the black hole, but it is strong enough to influence how the stream interacts with itself and its surrounding environment, likely affecting what we can expect to see from these short-lived events. These simulations have clearly demonstrated the need to further explore the role of magnetic fields in the disruptions of stars by black holes. Bonus: Check out the full (brief) video from one of the simulations by Guillochon and McCourt (be sure to watch it in high-res!). It reveals the evolution of a star's magnetic field configuration as the star is partially disrupted by the forces of a supermassive black hole and then re-accretes. Citation: James Guillochon and Michael McCourt 2017 ApJL 834 L19. doi:10.3847/2041-8213/834/2/L19
DOE Office of Scientific and Technical Information (OSTI.GOV)
Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross
2016-06-15
Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
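Conceptually, each dynamic frame in such a projection-based simulator can be approximated in a few lines; this is a sketch of the idea, not the released dPETSTEP code (the PSF width and count level are assumed):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
rng = np.random.default_rng(0)

def noisy_frame(activity, frame_counts=2e5, fwhm_px=4.0):
    """One simulated dynamic frame: blur a noise-free activity image with
    the system PSF, scale to the expected counts, add Poisson noise."""
    sigma = fwhm_px / 2.355                  # convert FWHM to Gaussian sigma
    blurred = gaussian_filter(activity, sigma)
    expected = blurred / blurred.sum() * frame_counts
    return rng.poisson(expected).astype(float)

frame = noisy_frame(np.ones((64, 64)))       # toy uniform activity image
```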
Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy by comparing with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by EGSnrc code with 3 × 10⁸ to 1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth dose (PDD) of SC, MC, and film measurements were analyzed. Cross beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and difference in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation speed than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan
2017-07-01
The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy between the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps of 2020 and 2030 were created. The validation findings confirm that the integration of the CA-MC model with the FR model and employing the significant driving force of urban growth in the simulation process have resulted in the improved simulation capability of the CA-MC model. This study has provided a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
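As a sketch of the FR ingredient (the definition below is the standard frequency-ratio form; the data here are hypothetical, not the Seremban dataset):

```python
import numpy as np

def frequency_ratio(factor_class, urban):
    """Frequency ratio of each class of a driving factor: the share of
    urban pixels falling in the class divided by the share of all pixels
    in the class. FR > 1 means the class favours urban growth."""
    fr = {}
    n, n_urb = factor_class.size, urban.sum()
    for c in np.unique(factor_class):
        in_c = factor_class == c
        fr[c] = (urban[in_c].sum() / n_urb) / (in_c.sum() / n)
    return fr

rng = np.random.default_rng(0)
cls = rng.integers(1, 4, 1000)        # e.g. 3 slope classes (hypothetical)
urb = rng.random(1000) < 0.2          # hypothetical urban mask
print(frequency_ratio(cls, urb))
```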
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
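At the core of all such move sets sits the Metropolis acceptance test; a minimal generic sketch (the CBC move additionally requires a Jacobian factor for the bond-/torsion-angle coordinates, omitted here):

```python
import numpy as np
rng = np.random.default_rng(0)

def metropolis_accept(dE, kT=2.479):
    """Standard Metropolis criterion used by atomistic MC move sets:
    accept downhill moves always, uphill moves with Boltzmann probability.
    dE and kT in kJ/mol; kT ~ 2.479 kJ/mol at 298 K."""
    return dE <= 0.0 or rng.random() < np.exp(-dE / kT)
```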
Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas
2017-05-01
This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonics integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
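A common way to realize such a spatially correlated variation map, shown here as a sketch (the grid size, amplitude, and correlation length are assumptions, not the paper's wafer-calibrated values), is to low-pass filter white noise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
rng = np.random.default_rng(0)

def correlated_variation_map(shape=(200, 200), sigma_nm=2.0, corr_px=25):
    """Spatially correlated variation field on a discrete grid: filter
    white noise with a Gaussian kernel (correlation length corr_px),
    then rescale so the point-wise standard deviation is sigma_nm."""
    field = gaussian_filter(rng.standard_normal(shape), corr_px)
    return field / field.std() * sigma_nm   # e.g. waveguide-width error (nm)

dw = correlated_variation_map()
# Each circuit component then reads the value at its layout coordinates,
# so nearby components receive correlated width/height perturbations.
width_error_nm = dw[120, 45]
```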
NASA Astrophysics Data System (ADS)
Jung, Hyunuk; Shin, Jungsuk; Chung, Kwangzoo; Han, Youngyih; Kim, Jinsung; Choi, Doo Ho
2015-05-01
The aim of this study was to develop an independent dose verification system by using a Monte Carlo (MC) calculation method for intensity modulated radiation therapy (IMRT) conducted by using a Varian Novalis Tx (Varian Medical Systems, Palo Alto, CA, USA) equipped with a high-definition multileaf collimator (HD-120 MLC). The Geant4 framework was used to implement a dose calculation system that accurately predicted the delivered dose. For this purpose, the Novalis Tx Linac head was modeled according to the specifications acquired from the manufacturer. Subsequently, MC simulations were performed by varying the mean energy, energy spread, and electron spot radius to determine optimum values of irradiation with 6-MV X-ray beams by using the Novalis Tx system. Computed percentage depth dose curves (PDDs) and lateral profiles were compared to the measurements obtained by using an ionization chamber (CC13). To validate the IMRT simulation by using the MC model we developed, we calculated a simple IMRT field and compared the result with the EBT3 film measurements in a water-equivalent solid phantom. Clinical cases, such as prostate cancer treatment plans, were then selected, and MC simulations were performed. The accuracy of the simulation was assessed against the EBT3 film measurements by using a gamma-index criterion. The optimal MC model parameters to specify the beam characteristics were a 6.8-MeV mean energy, a 0.5-MeV energy spread, and a 3-mm electron radius. The accuracy of these parameters was determined by comparison of MC simulations with measurements. The PDDs and the lateral profiles of the MC simulation deviated from the measurements by 1% and 2%, respectively, on average. The computed simple MLC fields agreed with the EBT3 measurements with a 95% passing rate with 3%/3-mm gamma-index criterion. Additionally, in applying our model to clinical IMRT plans, we found that the MC calculations and the EBT3 measurements agreed well with a passing rate of greater than 95% on average with a 3%/3-mm gamma-index criterion. In summary, the Novalis Tx Linac head equipped with a HD-120 MLC was successfully modeled by using a Geant4 platform, and the accuracy of the Geant4 platform was successfully validated by comparisons with measurements. The MC model we have developed can be a useful tool for pretreatment quality assurance of IMRT plans and for commissioning of radiotherapy treatment planning.
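For orientation, the gamma-index evaluation used in such comparisons can be sketched in 1D as follows (an educational global-gamma sketch, not a clinical implementation):

```python
import numpy as np

def gamma_1d(x, d_eval, d_ref, dd=0.03, dta=3.0):
    """Simple global 1D gamma index (3%/3 mm by default): for every
    reference point, minimize the combined dose/distance metric over
    all evaluated points. x in mm; doses in arbitrary units."""
    d_max = d_ref.max()                       # global dose normalization
    gam = np.empty_like(d_ref, dtype=float)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dose_term = (d_eval - di) / (dd * d_max)
        dist_term = (x - xi) / dta
        gam[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gam   # passing rate = np.mean(gamma_1d(...) <= 1.0)
```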
Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results and experimental electron-probe microanalysis (EPMA) measurements as well as φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop φ(ρz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and φ(ρz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP φ(ρz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
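For reference, the binary alpha-factor relation used in such comparisons is usually written in the hyperbolic (Ziebold-Ogilvie) form, where \(K_A = I_A / I_A^{\mathrm{pure}}\) is the measured k-ratio (standard textbook form, quoted here for orientation):

$$\frac{C_A}{K_A} = \alpha_{AB} + \left(1 - \alpha_{AB}\right) C_A$$

so that an alpha-factor can be extracted from any \((C_A, K_A)\) pair, whether the intensities come from experiment or from MC simulation.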
NASA Astrophysics Data System (ADS)
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular-flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the computational cost of MC simulation of particle tracks is very high, so improving computational efficiency is essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazalova-Carter, Magdalena; Liu, Michael; Palma, Bianey
2015-04-15
Purpose: To measure radiation dose in a water-equivalent medium from very high-energy electron (VHEE) beams and make comparisons to Monte Carlo (MC) simulation results. Methods: Dose in a polystyrene phantom delivered by an experimental VHEE beam line was measured with Gafchromic films for three 50 MeV and two 70 MeV Gaussian beams of 4.0–6.9 mm FWHM and compared to corresponding MC-simulated dose distributions. MC dose in the polystyrene phantom was calculated with the EGSnrc/BEAMnrc and DOSXYZnrc codes based on the experimental setup. Additionally, the effect of 2% beam energy measurement uncertainty and possible non-zero beam angular spread on MC dose distributions was evaluated. Results: MC simulated percentage depth dose (PDD) curves agreed with measurements within 4% for all beam sizes at both 50 and 70 MeV VHEE beams. Central axis PDD at 8 cm depth ranged from 14% to 19% for the 5.4–6.9 mm 50 MeV beams and it ranged from 14% to 18% for the 4.0–4.5 mm 70 MeV beams. MC simulated relative beam profiles of regularly shaped Gaussian beams evaluated at depths of 0.64 to 7.46 cm agreed with measurements to within 5%. A 2% beam energy uncertainty and 0.286° beam angular spread corresponded to a maximum 3.0% and 3.8% difference in depth dose curves of the 50 and 70 MeV electron beams, respectively. Absolute dose differences between MC simulations and film measurements of regularly shaped Gaussian beams were between 10% and 42%. Conclusions: The authors demonstrate that relative dose distributions for VHEE beams of 50–70 MeV can be measured with Gafchromic films and modeled with Monte Carlo simulations to an accuracy of 5%. The reported absolute dose differences likely caused by imperfect beam steering and subsequent charge loss revealed the importance of accurate VHEE beam control and diagnostics.
Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald
2010-05-01
To address the lack of an accurate dose estimation method in cone beam computed tomography (CBCT), we performed point-dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. A MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half value layer, x-ray spectrum, and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode, respectively. The CTDIw from MC agreed with the MOSFET measurements to within 5%. In conclusion, a MC model for Varian CBCT has been established, and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry.
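For reference, the weighted CTDI quoted above combines the center and periphery point doses in the standard way:

$$\mathrm{CTDI_w} = \frac{1}{3}\,\mathrm{CTDI}_{100,\,\mathrm{center}} + \frac{2}{3}\,\mathrm{CTDI}_{100,\,\mathrm{periphery}}$$

with the MOSFET point doses supplying the center and periphery values in this study.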
Full scattering profile of tissues with elliptical cross sections
NASA Astrophysics Data System (ADS)
Duadi, H.; Feder, I.; Fixler, D.
2018-02-01
Light reflectance and transmission from soft tissue has been utilized in noninvasive clinical measurement devices such as the photoplethysmograph (PPG) and reflectance pulse oximeter. Most methods of near infrared (NIR) spectroscopy focus on the volume reflectance from a semi-infinite sample, while very few measure transmission. However, since PPG and pulse oximetry are usually measured on tissue such as the earlobe, fingertip, lip, or pinched tissue, we propose examining the full scattering profile (FSP), which is the angular distribution of exiting photons. The FSP provides more comprehensive information when measuring from a cylindrical tissue. In our work we discovered a unique point, which we named the iso-pathlength (IPL) point, that does not depend on changes in the reduced scattering coefficient (µs'). This IPL point was observed both in Monte Carlo (MC) simulation and in experimental tissue-mimicking phantoms. The angle corresponding to the IPL point depends only on the tissue geometry; in the case of cylindrical tissues it depends linearly on the tissue diameter. Since the target tissues for clinical physiological measurement are not perfect cylinders, in this work we examine how a change in the tissue cross-section geometry influences the FSP and the IPL point. We used an MC simulation to compare a circular to an elliptic tissue cross section. The IPL point can serve as a self-calibration point for optical tissue measurements such as NIR spectroscopy, PPG, and pulse oximetry.
WE-EF-207-05: Monte Carlo Dosimetry for a Dedicated Cone-Beam CT Head Scanner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Xu, J
Purpose: Cone-Beam CT (CBCT) is an attractive platform for point-of-care imaging of traumatic brain injury and intracranial hemorrhage. This work implements and evaluates a fast Monte-Carlo (MC) dose estimation engine for development of a dedicated head CBCT scanner, optimization of acquisition protocols, geometry, bowtie filter designs, and patient-specific dosimetry. Methods: Dose scoring with a GPU-based MC CBCT simulator was validated on an imaging bench using a modified 16 cm CTDI phantom with 7 ion chamber shafts along the central ray for 80–100 kVp (+2 mm Al, +0.2 mm Cu). Dose distributions were computed in a segmented CBCT reconstruction of an anthropomorphic head phantom with 4×10⁵ tracked photons per scan (5 min runtime). Circular orbits with angular span ranging from short scan (180° + fan angle) to full rotation (360°) were considered for fixed total mAs per scan. Two aluminum filters were investigated: aggressive bowtie, and moderate bowtie (matched to 16 cm and 32 cm water cylinder, respectively). Results: MC dose estimates showed strong agreement with measurements (RMSE<0.001 mGy/mAs). A moderate (aggressive) bowtie reduced the dose, per total mAs, by 20% (30%) at the center of the head, by 40% (50%) at the eye lens, and by 70% (80%) at the posterior skin entrance. For the no bowtie configuration, a short scan reduced the eye lens dose by 62% (from 0.08 mGy/mAs to 0.03 mGy/mAs) compared to full scan, although the dose to spinal bone marrow increased by 40%. For both bowties, the short scan resulted in a similar 40% increase in bone marrow dose, but the reduction in the eye lens was more pronounced: 70% (90%) for the moderate (aggressive) bowtie. Conclusions: Dose maps obtained with validated MC simulation demonstrated dose reduction in sensitive structures (eye lens and bone marrow) through combination of short-scan trajectories and bowtie filters. Xiaohui Wang and David Foos are employees of Carestream Health.
Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M
2013-04-16
We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.
Simulating x-ray telescopes with McXtrace: a case study of ATHENA's optics
NASA Astrophysics Data System (ADS)
Ferreira, Desiree D. M.; Knudsen, Erik B.; Westergaard, Niels J.; Christensen, Finn E.; Massahi, Sonny; Shortt, Brian; Spiga, Daniele; Solstad, Mathias; Lefmann, Kim
2016-07-01
We use the X-ray ray-tracing package McXtrace to simulate the performance of X-ray telescopes based on Silicon Pore Optics (SPO) technologies. We use as reference the design of the optics of the planned X-ray mission Advanced Telescope for High ENergy Astrophysics (ATHENA), which is designed as a single X-ray telescope populated with stacked SPO substrates forming mirror modules to focus X-ray photons. We show that it is possible to simulate the SPO pores in detail and qualify the use of McXtrace for in-depth analysis of in-orbit performance and laboratory X-ray test results.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
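The master/worker pattern described above can be sketched with MPI in a few lines; this is a conceptual sketch (the history count, seeding scheme, and the dummy run_egs5 stand-in are assumptions, not the authors' scripts):

```python
from mpi4py import MPI
import random

def run_egs5(n_histories, seed):
    """Stand-in for the real EGS5 kernel: returns a dummy dose tally."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_histories)) / n_histories

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 1_000_000                       # histories for the whole job
n_local = n_total // size                 # independent share per worker
local = run_egs5(n_local, seed=12345 + rank)   # distinct RNG stream per node

# The master node aggregates the partial tallies from all workers.
tallies = comm.gather(local, root=0)
if rank == 0:
    print("mean tally:", sum(tallies) / size)
```

Because the worker runs are statistically independent, the wall-clock time scales inversely with the number of nodes, which is the behaviour the abstract reports.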
Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations
Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi
2016-01-01
Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, a higher SE is achieved by perturbing the conventional MD simulations with an MC structure-acceptance judgment, which is based on the degree of coincidence of small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded, on average, structures 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
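A minimal sketch of the acceptance step described above may help make the scheme concrete. All callables (`run_md_segment`, `saxs_profile`) are hypothetical stand-ins for an MD engine and a SAXS calculator, and the Metropolis-style criterion on the profile discrepancy is an assumption in the spirit of the abstract, not the paper's exact prescription.

```python
# Sketch of a hybrid MD-MC enrichment loop: short MD segments are accepted
# or rejected according to how well the structure's SAXS profile matches a
# target profile (chi-square discrepancy). Hypothetical helpers throughout.
import numpy as np

def chi2(i_sim, i_target, sigma):
    return np.sum(((i_sim - i_target) / sigma) ** 2)

def md_mc_enrichment(x0, i_target, sigma, n_cycles, beta, rng,
                     run_md_segment, saxs_profile):
    x, c = x0, chi2(saxs_profile(x0), i_target, sigma)
    for _ in range(n_cycles):
        x_new = run_md_segment(x)                    # conventional MD move
        c_new = chi2(saxs_profile(x_new), i_target, sigma)
        # accept if the SAXS agreement improves, else with Boltzmann-like prob.
        if c_new <= c or rng.random() < np.exp(-beta * (c_new - c)):
            x, c = x_new, c_new                      # keep enriched structure
    return x
```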
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Rangaraj, D
2016-06-15
Purpose: Although cone-beam CT (CBCT) imaging has become popular in radiation oncology, estimating its imaging dose is still challenging. The goal of this study is to assess kilovoltage CBCT doses using GMctdospp, an EGSnrc-based Monte Carlo (MC) framework. Methods: Two Varian OBI x-ray tube models were implemented in the GMctdospp framework of the EGSnrc MC system. The x-ray spectrum of the 125 kVp CBCT beam was acquired from an EGSnrc/BEAMnrc simulation and validated against IPEM Report 78. The spectrum was then used as the input spectrum in the GMctdospp dose calculations. Both the full and half bowtie pre-filters of the OBI system were created using the egs-prism module. The x-ray tube MC models were verified by comparing calculated dosimetric profiles (lateral and depth) to ion chamber measurements for a static x-ray beam irradiating a cuboid water phantom. Abdominal CBCT imaging doses were then simulated in the GMctdospp framework using a 5-year-old anthropomorphic phantom. The organ doses and effective dose (ED) from the framework were assessed and compared to MOSFET measurements and convolution/superposition (CS) dose calculations. Results: The lateral and depth dose profiles in the cuboid water phantom matched within 6% except in a few areas: the left shoulder of the half bowtie lateral profile and the surface of the water phantom. The organ doses and ED from the MC framework agreed with the MOSFET measurements and CS calculations to within 2 cGy and 5 mSv, respectively. Conclusion: This study implemented and validated Varian OBI x-ray tube models in the GMctdospp MC framework using a cuboid water phantom, and CBCT imaging doses were evaluated in a 5-year-old anthropomorphic phantom. In future studies, various CBCT imaging protocols will be implemented and validated, and patient CT images will be used to estimate CBCT imaging doses in patients.
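For readers unfamiliar with the effective-dose bookkeeping used in such comparisons, a minimal sketch follows: ED is the weighted sum of organ equivalent doses, ED = Σ_T w_T H_T. The weighting factors below are an illustrative subset in the style of ICRP 103, and the organ doses are hypothetical numbers, not values from the study.

```python
# Sketch of the effective-dose sum ED = sum_T w_T * H_T used when comparing
# MC organ doses with MOSFET measurements. Weights: illustrative subset in
# the style of ICRP 103; organ doses: hypothetical, not from the study.
tissue_weights = {        # w_T (subset, illustrative)
    "lung": 0.12, "stomach": 0.12, "colon": 0.12,
    "liver": 0.04, "bladder": 0.04, "thyroid": 0.04,
}
organ_dose_mSv = {        # H_T from an MC tally (hypothetical values)
    "lung": 4.1, "stomach": 5.0, "colon": 4.6,
    "liver": 4.8, "bladder": 3.9, "thyroid": 0.7,
}
effective_dose = sum(w * organ_dose_mSv[t] for t, w in tissue_weights.items())
print(f"partial effective dose: {effective_dose:.2f} mSv")
```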
In-simulator training of driving abilities in a person with a traumatic brain injury.
Gamache, Pierre-Luc; Lavallière, Martin; Tremblay, Mathieu; Simoneau, Martin; Teasdale, Normand
2011-01-01
This study reports the case of a 23-year-old woman (MC) who sustained a severe traumatic brain injury in 2004. After her accident, her driving license was revoked. Despite recovering normal neuropsychological functions in the following years, MC was unable to renew her license, failing four on-road evaluations assessing her fitness to drive. In the hope of an eventual license renewal, MC went through an in-simulator training programme in the laboratory in 2009. The training programme aimed at improving features of MC's driving behaviour that had been identified as problematic in prior on-road evaluations. To do so, proper driving behaviour was reinforced via driving-specific feedback provided during the training sessions. After 25 sessions in the simulator (over a period of 4 months), MC significantly improved various components of her driving. Notably, compared to early sessions, later ones were associated with a reduced cognitive load, less jerky speed profiles when stopping at intersections, and better vehicle control and positioning. A 1-year retention test showed that most of these improvements persisted. The learning principles underlying well-conducted simulator-based education programmes have a strong scientific basis. A simulator training programme like this one represents a promising avenue for driving rehabilitation. It allows individuals without a driving license to practice and improve their skills in a safe and realistic environment.
Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.
Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna
2018-04-28
We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with a more realistic carrier gas. In this study, we consider not only the primarily used velocity scaling thermostat, but also the Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference, since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference from the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by the classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].
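As a point of reference for the thermostats compared above, a stochastic Langevin thermostat can be sketched in a few lines: an Euler-Maruyama velocity update with friction γ and a matching random force, so the stationary distribution is Maxwellian at the target temperature. This is the generic textbook form with illustrative parameters, not the integrator used in the paper.

```python
# Sketch of a stochastic Langevin thermostat step (Euler-Maruyama form,
# reduced units): friction -gamma*v plus a random force whose amplitude
# satisfies the fluctuation-dissipation relation at temperature kT.
import numpy as np

def langevin_step(v, force, mass, gamma, kT, dt, rng):
    noise = rng.standard_normal(v.shape)
    return (v
            + (force / mass - gamma * v) * dt                 # drift
            + np.sqrt(2.0 * gamma * kT / mass * dt) * noise)  # diffusion
```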
Enhanced Master Controller Unit Tester
NASA Technical Reports Server (NTRS)
Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony
2007-01-01
The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.
Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granroth, Garrett E; Chen, Meili; Kohl, James Arthur
2007-01-01
Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of computation because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine-resolution (ΔE/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for, and results from, these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.
Mathematical modeling and full-scale shaking table tests for multi-curve buckling restrained braces
NASA Astrophysics Data System (ADS)
Tsai, C. S.; Lin, Yungchang; Chen, Wenshin; Su, H. C.
2009-09-01
Buckling restrained braces (BRBs) have been widely applied in seismic mitigation since they were introduced in the 1970s. However, traditional BRBs have several disadvantages caused by using a steel tube to envelope the mortar to prevent the core plate from buckling, such as: complex interfaces between the materials used, uncertain precision, and time consumption during the manufacturing processes. In this study, a new device called the multi-curve buckling restrained brace (MC-BRB) is proposed to overcome these disadvantages. The new device consists of a core plate with multiple neck portions assembled to form multiple energy dissipation segments, and the enlarged segment, lateral support elements and constraining elements to prevent the BRB from buckling. The enlarged segment located in the middle of the core plate can be welded to the lateral support and constraining elements to increase buckling resistance and to prevent them from sliding during earthquakes. Component tests and a series of shaking table tests on a full-scale steel structure equipped with MC-BRBs were carried out to investigate the behavior and capability of this new BRB design for seismic mitigation. The experimental results illustrate that the MC-BRB possesses a stable mechanical behavior under cyclic loadings and provides good protection to structures during earthquakes. Also, a mathematical model has been developed to simulate the mechanical characteristics of BRBs.
NASA Astrophysics Data System (ADS)
Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles
2017-10-01
RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous and anthropomorphic phantoms. The computational performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
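The gamma index quoted above can be sketched compactly. The following is a minimal 1D global-gamma implementation (dose tolerance relative to the reference maximum), assuming both profiles are sampled on the same grid; clinical gamma tools operate on 2D/3D grids with interpolation.

```python
# Minimal 1D global gamma index (3%/3 mm by default): for each reference
# point, take the minimum over evaluated points of the combined
# dose-difference and distance-to-agreement metric.
import numpy as np

def gamma_index_1d(x, d_eval, d_ref, dose_tol=0.03, dta_mm=3.0):
    """Gamma of an evaluated profile against a reference on the same grid;
    dose_tol is relative to the reference maximum (global normalization)."""
    dmax = d_ref.max()
    gam = np.empty_like(d_ref, dtype=float)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dd = (d_eval - di) / (dose_tol * dmax)       # dose-difference term
        dr = (x - xi) / dta_mm                       # distance term
        gam[i] = np.min(np.sqrt(dd ** 2 + dr ** 2))  # min over eval points
    return gam

# pass rate = fraction of reference points with gamma <= 1, e.g.:
# pass_rate = np.mean(gamma_index_1d(x, d_eval, d_ref) <= 1.0)
```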
Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation
NASA Astrophysics Data System (ADS)
Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong
2018-02-01
Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include an optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and the spatial resolution of optical imaging simulation. The lens implemented in GATE was validated against the ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could significantly improve the image quality of bioluminescence and fluorescence imaging as compared with pinhole optics.
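In paraxial terms, a biconvex lens such as the one implemented above can be sketched with ABCD ray-transfer matrices and the lensmaker's equation. The numbers below are illustrative, not the GATE geometry.

```python
# Paraxial (ABCD) sketch of a thin biconvex lens: the lensmaker's equation
# gives the focal length, then free-space and thin-lens matrices propagate
# a ray (height, angle). Illustrative geometry, not the GATE model.
import numpy as np

def thin_lens_f(n, R1, R2):
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

def trace(height_mm, angle_rad, d1_mm, f_mm, d2_mm):
    ray = np.array([height_mm, angle_rad])
    ray = np.array([[1, d1_mm], [0, 1]]) @ ray      # propagate to lens
    ray = np.array([[1, 0], [-1 / f_mm, 1]]) @ ray  # refract at thin lens
    return np.array([[1, d2_mm], [0, 1]]) @ ray     # propagate to detector

f = thin_lens_f(n=1.5, R1=50.0, R2=-50.0)           # symmetric biconvex, f = 50 mm
print(trace(1.0, 0.0, 100.0, f, 100.0))             # ray entering parallel to axis
```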
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
... iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC iterations. Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC results ... Contents include: A Priori Estimate of Number of MC Iterations; MC Result Accuracy; Using Percentage Error of the Mean to Estimate Number of MC Iterations.
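A standard normal-approximation version of the "percentage error of the mean" sizing idea can be sketched as follows; this is a generic formula consistent with the fragment above, not necessarily the report's exact WM/WSM prescriptions.

```python
# Sketch: size the number of MC iterations so the confidence-interval
# half-width is within pct_err percent of the mean, using a pilot batch.
# Standard normal-approximation formula: n >= (z * s / (eps * mean))^2.
import numpy as np

def required_iterations(pilot, pct_err=1.0, z=1.96):
    mean, sd = pilot.mean(), pilot.std(ddof=1)
    half_width = pct_err / 100.0 * abs(mean)     # target absolute precision
    return int(np.ceil((z * sd / half_width) ** 2))

rng = np.random.default_rng(0)
pilot = rng.exponential(scale=2.0, size=1_000)   # pilot MC sample
print(required_iterations(pilot, pct_err=1.0))   # n for ±1% at ~95% conf.
```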
Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang
2010-03-01
The purpose of this study was to establish a dose estimation tool with Monte Carlo (MC) simulations. A 5-year-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom, which was used as an input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An x-ray tube model of the Varian On-Board Imager® was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. In the results, the MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. This model can be easily applied to multi-detector CT dosimetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulation on the GPU and the CPU, are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14
NASA Astrophysics Data System (ADS)
Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.
2000-03-01
Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer, I.L.L., were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.
Monte Carlo simulations in radiotherapy dosimetry.
Andreo, Pedro
2018-06-27
The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in the last decades. Its widespread use in the field has converted this computer simulation technique into a common tool for reference and treatment planning dosimetry calculations. This work reviews the different MC calculations made on dosimetric quantities, like stopping-power ratios and perturbation correction factors required for reference ionization chamber dosimetry, as well as the fully realistic MC simulations currently available of clinical accelerators, detectors and patient treatment planning. Issues raised include the necessity for consistency in the data throughout the entire dosimetry chain in reference dosimetry, and how Bragg-Gray theory breaks down for small photon fields. Both aspects are less critical for MC treatment planning applications, but there are important constraints like tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in different types of treatment planning systems, they make the uncertainties involved in MC treatment planning still remain "uncertain".
Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
The 3DHZETRN code, with improved neutron and light ion (Z ≤ 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.
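The basic primitive behind ray tracing in combinatorial geometry is the distance from a point along a direction to a quadric boundary; regions are then built as unions and intersections of such primitives. A minimal sphere-crossing sketch follows; it is a generic illustration, not code from 3DHZETRN or the MC codes.

```python
# Distance along a ray to a sphere surface: the core primitive of
# combinatorial-geometry particle tracking (generic illustration).
import numpy as np

def distance_to_sphere(origin, direction, center, radius):
    """Smallest positive distance to the sphere surface, or inf if missed.
    `direction` is assumed to be a unit vector."""
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return np.inf                      # ray misses the sphere
    root = np.sqrt(disc)
    for t in (-b - root, -b + root):       # nearer crossing first
        if t > 1e-12:
            return t
    return np.inf

print(distance_to_sphere(np.array([0., 0., -5.]), np.array([0., 0., 1.]),
                         np.array([0., 0., 0.]), 1.0))  # -> 4.0
```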
OneSAF as an In-Stride Mission Command Asset
2014-06-01
Keywords: Mission Command (MC), Modeling and Simulation (M&S), Distributed Interactive Simulation (DIS). ABSTRACT: To provide greater interoperability and integration within Mission Command (MC) systems, the One Semi-Automated Forces (OneSAF) entity-level simulation is evolving from a tightly coupled client-server implementation approach. While DARPA began with a funded project to complete the capability as a "big bang" approach, the approach here is based on reuse and...
Computer Simulation of Electron Thermalization in CsI and CsI(Tl)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhiguo; Xie, YuLong; Cannon, Bret D.
2011-09-15
A Monte Carlo (MC) model was developed and implemented to simulate the thermalization of electrons in inorganic scintillator materials. The model incorporates electron scattering with both longitudinal optical and acoustic phonons. In this paper, the MC model was applied to simulate electron thermalization in CsI, both pure and doped with a range of thallium concentrations. The inclusion of internal electric fields was shown to increase the fraction of recombined electron-hole pairs and to broaden the thermalization distance and thermalization time distributions. The MC simulations indicate that electron thermalization, following γ-ray excitation, takes place within approximately 10 ps in CsI and that electrons can travel distances up to several hundreds of nanometers. Electron thermalization was studied for a range of incident γ-ray energies using electron-hole pair spatial distributions generated by the MC code NWEGRIM (NorthWest Electron and Gamma Ray Interaction in Matter). These simulations revealed that the partition of thermalized electrons between different species (e.g., recombined with self-trapped holes or trapped at thallium sites) varies with the incident energy. Implications for the phenomenon of nonlinearity in scintillator light yield are discussed.
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
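A schematic of one such hybrid cycle may clarify the role of the momentum reversal. In the sketch below, the same sign flip is applied at both ends of the driven segment; `nemd_trajectory` and `energy` are hypothetical stand-ins, and the paper should be consulted for the exact symmetric prescription.

```python
# Schematic hybrid neMD-MC cycle with a "symmetric two-ends" momentum
# flip: the momenta are reversed (with probability 1/2, identically at
# both ends of the driven trajectory), and the candidate is accepted with
# a Metropolis test on the total energy change. Hypothetical helpers.
import numpy as np

def hybrid_nemd_mc_step(x, v, beta, rng, nemd_trajectory, energy):
    s = -1.0 if rng.random() < 0.5 else 1.0   # same flip at both ends
    x_new, v_new = nemd_trajectory(x, s * v)  # driven non-equilibrium segment
    v_new = s * v_new
    dE = energy(x_new, v_new) - energy(x, v)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        return x_new, v_new                   # accept candidate state
    return x, v                               # reject: keep old state
```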
Kinetic Monte Carlo (kMC) simulation of carbon co-implant on pre-amorphization process.
Park, Soonyeol; Cho, Bumgoo; Yang, Seungsu; Won, Taeyoung
2010-05-01
We report our kinetic Monte Carlo (kMC) study of the effect of carbon co-implantation on the pre-amorphization implant (PAI) process. We employed the BCA (Binary Collision Approximation) approach to obtain the initial as-implanted dopant profile, and the kMC method to simulate diffusion during the annealing process. The simulation results imply that carbon co-implantation suppresses boron diffusion through recombination with interstitials. By calculating the carbon-interstitial reactions, we also compared boron diffusion with carbon diffusion, and found that boron diffusion is affected by the carbon co-implant energy, which enhances the trapping of interstitials that would otherwise pair with boron.
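The rejection-free event selection at the heart of kinetic MC can be sketched in a few lines: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. The rates below are illustrative placeholders, not the diffusion or recombination rates of the study.

```python
# Sketch of rejection-free (BKL/Gillespie-style) kMC event selection:
# pick event i with probability rate_i / total, advance time by an
# exponential waiting time with mean 1/total. Illustrative rates only.
import numpy as np

def kmc_step(rates, t, rng):
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    t += -np.log(rng.random()) / total        # Gillespie time increment
    return event, t

rng = np.random.default_rng(1)
rates = np.array([1e3, 5e2, 1e1])             # e.g. hop, trap, recombine
event, t = kmc_step(rates, 0.0, rng)
print(event, t)
```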
van Oostrum, Jeroen M; Van Houdenhoven, Mark; Vrielink, Manon M J; Klein, Jan; Hans, Erwin W; Klimek, Markus; Wullink, Gerhard; Steyerberg, Ewout W; Kazemier, Geert
2008-11-01
Hospitals that perform emergency surgery during the night (e.g., from 11:00 pm to 7:30 am) face decisions on optimal operating room (OR) staffing. Emergency patients need to be operated on within a predefined safety window to decrease morbidity and improve their chances of full recovery. We developed a process to determine the optimal OR team composition during the night, such that staffing costs are minimized, while providing adequate resources to start surgery within the safety interval. A discrete event simulation in combination with modeling of safety intervals was applied. Emergency surgery was allowed to be postponed safely. The model was tested using data from the main OR of Erasmus University Medical Center (Erasmus MC). Two outcome measures were calculated: violation of safety intervals and frequency with which OR and anesthesia nurses were called in from home. We used the following input data from Erasmus MC to estimate distributions of all relevant parameters in our model: arrival times of emergency patients, durations of surgical cases, length of stay in the postanesthesia care unit, and transportation times. In addition, surgeons and OR staff of Erasmus MC specified safety intervals. Reducing in-house team members from 9 to 5 increased the fraction of patients treated too late by 2.5% as compared to the baseline scenario. Substantially more OR and anesthesia nurses were called in from home when needed. The use of safety intervals benefits OR management during nights. Modeling of safety intervals substantially influences the number of emergency patients treated on time. Our case study showed that by modeling safety intervals and applying computer simulation, an OR can reduce its staff on call without jeopardizing patient safety.
Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo
2013-01-01
Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson's equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise in which the ionic species with a higher valence-to-volume ratio has a larger probability of forming the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling. PMID:22680474
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
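The Amdahl's-law behavior reported above can be checked with a back-of-the-envelope sketch: the speedup on N instances is S(N) = 1 / ((1 - p) + p/N) for parallel fraction p, and the reported 37× on 64 instances can be inverted to estimate the non-parallelizable fraction.

```python
# Amdahl's law: S(N) = 1 / ((1 - p) + p/N). Inverting the reported 37x
# speedup on 64 instances gives a rough estimate of the serial fraction
# (a back-of-the-envelope check, not a fit to the paper's data).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

def serial_fraction(speedup: float, n: int) -> float:
    # solve S = 1/((1-p) + p/n) for the serial fraction (1 - p)
    return (1.0 / speedup - 1.0 / n) / (1.0 - 1.0 / n)

s = serial_fraction(37.0, 64)
print(f"serial fraction ~ {s:.3f}")                    # ~0.012
print(f"predicted S(128) ~ {amdahl_speedup(1 - s, 128):.1f}x")
```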
Hagos, Samson M.; Zhang, Chidong; Feng, Zhe; ...
2016-09-19
Influences of the diurnal cycle of convection on the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC) are examined using cloud-permitting regional model simulations and observations. A pair of ensembles of control (CONTROL) and no-diurnal-cycle (NODC) simulations of the November 2011 MJO episode are performed. In the CONTROL simulations, the MJO signal is weakened as it propagates across the MC, with much of the convection stalling over the large islands of Sumatra and Borneo. In the NODC simulations, where the incoming shortwave radiation at the top of the atmosphere is maintained at its daily mean value, the MJO signal propagating across the MC is enhanced. Examination of the surface energy fluxes in the simulations indicates that in the presence of the diurnal cycle, surface downwelling shortwave radiation in the CONTROL simulations is larger because clouds preferentially form in the afternoon. Furthermore, the diurnal co-variability of surface wind speed and skin temperature results in a larger sensible heat flux and a cooler land surface in the CONTROL simulations compared to NODC. An analysis of observations indicates that the modulation of the downwelling shortwave radiation at the surface by the diurnal cycle of cloudiness projects negatively on the MJO intraseasonal cycle and therefore disrupts the propagation of the MJO across the MC.
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Tian, Z
Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems. Low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone-beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ~2% statistical uncertainty. Hence, a speed-up factor of ~45.3 was achieved by this new path-by-path simulation scheme, where all the computations were spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
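The path-level Metropolis-Hastings step can be written schematically as below. `propose_path`, `path_weight`, and `proposal_pdf` are hypothetical stand-ins for the path proposal and the transport-physics weight; the detailed acceptance probability used in gMMC may differ.

```python
# Schematic Metropolis-Hastings accept/reject on whole photon paths:
# accept a candidate path with prob. min(1, [w(new) q(old|new)] /
# [w(old) q(new|old)]), where w is the physics weight of a path and
# q the proposal density. All helpers are hypothetical stand-ins.
import numpy as np

def mh_path_step(path, rng, propose_path, path_weight, proposal_pdf):
    cand = propose_path(path)
    a = (path_weight(cand) * proposal_pdf(path, cand)) / \
        (path_weight(path) * proposal_pdf(cand, path))
    return cand if rng.random() < min(1.0, a) else path
```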
A Coarse Grained Model for Methylcellulose: Spontaneous Ring Formation at Elevated Temperature
NASA Astrophysics Data System (ADS)
Huang, Wenjun; Larson, Ronald
Methylcellulose (MC) is widely used in food additives and pharmaceutical applications, where its thermo-reversible gelation behavior plays an important role. To date, the gelation mechanism is not well understood and therefore attracts great research interest. In this study, we adopted coarse-grained (CG) molecular dynamics simulations to model MC chains, including homopolymers and the random copolymers that model commercial METHOCEL A, in an implicit water environment, with each MC monomer modeled as a single bead. The simulations are carried out using the LAMMPS program. We parameterized our CG model using radial distribution functions from atomistic simulations of short MC oligomers, extrapolating the results to long chains. We used the dissociation free energy to validate our CG model against the atomistic model. The CG model captured the effects of monomer substitution type and temperature seen in the atomistic simulations. We applied this CG model to simulate single chains up to 1000 monomers long and obtained persistence lengths close to those determined from experiment. We observed the chain collapse transition for a random copolymer 600 monomers long at 50 °C. The chain collapsed into a stable ring structure with an outer diameter of around 14 nm, which appears to be a precursor to the fibril structure observed by Lodge et al. in recent studies of methylcellulose gels. Our CG model can be extended to other MC derivatives for studying the interaction between these polymers and small molecules, such as hydrophobic drugs.
Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector
NASA Astrophysics Data System (ADS)
Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2014-02-01
A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in a virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source, 250 μm in diameter, was imaged with this system. Experiments show that the image resolution with single-pixel photopeak events was 590 μm FWHM, while the image resolution with double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in the Geant4 Application for Emission Tomography (GATE). We defined the LSO detectors as a scanner ring and the 350 μm pixelated CZT detector as an insert ring. GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not factor in positron range and acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of the experimental data, MC-simulated data, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for charge-sharing events was also investigated. The PET image reconstructed using the interpolation algorithm shows improved image resolution compared with that obtained without the interpolation algorithm.
Concepts for dose determination in flat-detector CT
NASA Astrophysics Data System (ADS)
Kyriakou, Yiannis; Deak, Paul; Langner, Oliver; Kalender, Willi A.
2008-07-01
Flat-detector computed tomography (FD-CT) scanners provide large irradiation fields of typically 200 mm in the cranio-caudal direction. In consequence, dose assessment according to the current definition of the computed tomography dose index, CTDI with integration length L = 100 mm, would demand larger ionization chambers and phantoms, which do not appear practical. We investigated the usefulness of the CTDI concept and practical dosimetry approaches for FD-CT by measurements and Monte Carlo (MC) simulations. An MC simulation tool (ImpactMC, VAMP GmbH, Erlangen, Germany) was used to assess the dose characteristics and was calibrated with measurements of air kerma. For validation purposes, measurements were performed on an Axiom Artis C-arm system (Siemens Medical Solutions, Forchheim, Germany) equipped with a flat detector of 40 cm × 30 cm. The dose was assessed for 70 kV and 125 kV in cylindrical PMMA phantoms of 160 mm and 320 mm diameter, with the phantom length varying from 150 to 900 mm. MC simulation results were compared to the values obtained with calibrated ionization chambers of 100 mm and 250 mm length and to thermoluminescence dosimeter (TLD) dose profiles. The MC simulations were used to calculate the efficiency of the CTDI determination at finite integration length L with respect to the desired CTDI∞. Both the MC simulation results and the dose distributions obtained by MC simulation were in very good agreement with the CTDI measurements and with the reference TLD profiles, respectively, to within 5%. Standard CTDI phantoms, which have a z-extent of 150 mm, underestimate the dose at the center by up to 55%, whereas a z-extent of ≥600 mm appears to be sufficient for FD-CT; the baseline value of the respective profile was within 1% of the reference baseline. As expected, the measurements with ionization chambers of 100 mm and 250 mm length offer limited accuracy; an increased integration length of ≥600 mm appeared necessary to approximate CTDI∞ to within 1%. MC simulations appear to offer a practical and accurate way of assessing conversion factors for arbitrary dosimetry setups, using a standard pencil chamber to provide estimates of CTDI∞. This would eliminate the need for extra-long phantoms and ionization chambers or excessive amounts of TLDs.
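The CTDI quantities discussed above reduce to a single integral: CTDI over length L is the integral of the single-rotation dose profile D(z) over L, divided by the nominal beam width. A numeric sketch with a toy profile follows; in practice D(z) comes from TLD rows or MC-simulated profiles.

```python
# Numeric sketch of CTDI_L = (1 / beam_width) * integral of D(z) over L.
# The Gaussian profile is a toy stand-in for a scatter-broadened D(z);
# increasing L shows the convergence toward CTDI-infinity.
import numpy as np

def ctdi(z_mm, dose_profile, beam_width_mm, L_mm):
    inside = np.abs(z_mm) <= L_mm / 2.0
    return np.trapz(dose_profile[inside], z_mm[inside]) / beam_width_mm

z = np.linspace(-450, 450, 9001)                 # mm
profile = np.exp(-0.5 * (z / 90.0) ** 2)         # toy single-rotation D(z)
for L in (100, 250, 600):
    print(L, ctdi(z, profile, beam_width_mm=200.0, L_mm=L))
```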
Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly with the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
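Of the diagnostics named above, the Gelman-Rubin statistic is the most standard. A sketch of the textbook formula follows, computed from m independent tally chains of length n; the KENO implementation may differ in detail.

```python
# Textbook Gelman-Rubin R-hat from m chains of length n: compare the
# within-chain variance W with the between-chain variance B; R-hat ~ 1
# when the chains have converged to the same distribution.
import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """chains: shape (m, n) array of per-generation tally estimates."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(2)
print(gelman_rubin(rng.normal(size=(4, 500))))   # ~1.0 for i.i.d. chains
```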
The Monte Carlo simulation of the Borexino detector
NASA Astrophysics Data System (ADS)
Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2018-01-01
We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.
Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.
Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A
2016-06-01
The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
NASA Astrophysics Data System (ADS)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-07
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.
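The benchmark metric used above (average dose difference restricted to voxels receiving more than 10% of the maximum dose) is easy to state precisely; a minimal sketch follows. Normalizing to the maximum of the reference grid is an assumption, since the abstract does not specify local versus global normalization.

```python
import numpy as np

def mean_dose_difference(dose_test, dose_ref, threshold=0.10):
    """Average relative difference (%) between two MC dose grids,
    restricted to voxels receiving more than `threshold` of the
    maximum reference dose, as used when benchmarking one dose
    engine against another."""
    ref_max = dose_ref.max()
    mask = dose_ref > threshold * ref_max
    rel_diff = np.abs(dose_test[mask] - dose_ref[mask]) / ref_max
    return 100.0 * rel_diff.mean()

# Synthetic example with two noisy realizations of the same dose grid
rng = np.random.default_rng(1)
ref = np.clip(rng.normal(1.0, 0.2, (50, 50, 50)), 0.0, None)
test = ref * rng.normal(1.0, 0.005, ref.shape)
print(mean_dose_difference(test, ref))
```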
NASA Astrophysics Data System (ADS)
Korayem, A. H.; Abdi, M.; Korayem, M. H.
2018-06-01
Nanoscale surface topography is one of the most important applications of AFM, and analysis of the vibration behavior of piezoelectric microcantilevers is essential for improving AFM performance. To this end, an appropriate way to simulate the dynamic behavior of a microcantilever (MC) is numerical solution with FEM, here in 3D using COMSOL software. The present study simulates different geometries of four-layered AFM piezoelectric MCs in 2D and 3D models in a liquid medium using COMSOL. The 3D simulation was performed in a spherical container using the FSI domain in COMSOL. In the 2D model, the governing equation of motion was derived by applying Hamilton's principle to Euler-Bernoulli beam theory and discretized with FEM; the hydrodynamic force was approximated by a string of spheres, and its effect, along with the squeezed-film force, was included in the MC equations. The influence of fluid density and viscosity on the vibrations of MCs immersed in different glycerin solutions was investigated in the 2D and 3D models, and the results were compared with experiment. The frequencies and time responses of the MC close to the surface were obtained with tip-sample forces taken into account. The surface topographies produced by different MC geometries were compared in the liquid medium in both tapping and non-contact modes, various types of surface roughness were considered for each geometry, and the effect of the geometric dimensions on the topography was investigated. In a liquid medium the MC is mounted at an oblique angle to avoid damage from the squeezed-film force in the vicinity of the surface; finally, the effect of the MC's angle on the surface topography and the time response of the system was investigated.
Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul
2014-01-01
As part of an ongoing effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin, analyses and simulations of the hydrology of the Edisto River Basin were made using the topography-based hydrological model (TOPMODEL). A primary focus of the investigation was to assess the potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, which is a small headwater catchment to the Edisto River Basin. Scaling up was done in a step-wise manner, beginning with applying the calibration parameters, meteorological data, and topographic-wetness-index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made for subsequent simulations, culminating in the best simulation, which included meteorological and topographic-wetness-index data from the Edisto River Basin and updated values for some of the TOPMODEL calibration parameters. The scaling-up process resulted in nine simulations being made. Simulation 7 best matched the streamflows at station 02175000, Edisto River near Givhans, SC, which was the downstream limit for the TOPMODEL setup, and was obtained by adjusting the scaling factor, including streamflow routing, and using NEXRAD precipitation data for the Edisto River Basin. The Nash-Sutcliffe coefficient of model-fit efficiency and Pearson’s correlation coefficient for simulation 7 were 0.78 and 0.89, respectively. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the McTier Creek and Edisto River models showed that with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the substantial difference in drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD–H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek Basin were used in the water-quality load models.
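For reference, the two goodness-of-fit statistics quoted above can be computed as follows; the streamflow numbers in the example are synthetic, not Edisto River data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model-fit efficiency: 1 is a perfect fit,
    0 means the model does no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

# Example with synthetic daily mean streamflows (cfs)
obs = np.array([120.0, 95.0, 80.0, 150.0, 200.0, 170.0])
sim = np.array([110.0, 100.0, 85.0, 140.0, 210.0, 160.0])
print(nash_sutcliffe(obs, sim))       # model-fit efficiency
print(np.corrcoef(obs, sim)[0, 1])    # Pearson's correlation coefficient
```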
McStas 1.7 - a new version of the flexible Monte Carlo neutron scattering package
NASA Astrophysics Data System (ADS)
Willendrup, Peter; Farhi, Emmanuel; Lefmann, Kim
2004-07-01
Current neutron instrumentation is both complex and expensive, and accurate simulation has become essential both for building new instruments and for using them effectively. The McStas neutron ray-trace simulation package is a versatile tool for producing such simulations, developed in collaboration between Risø and ILL. The new version (1.7) has many improvements, among these added support for the popular Microsoft Windows platform. This presentation will demonstrate a selection of the new features through a simulation of the ILL IN6 beamline.
NASA Astrophysics Data System (ADS)
Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés
2014-12-01
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dosage simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dosage simulation.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
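The quoted 47x speed-up on 100 nodes follows directly from the reported timings; the short calculation below also derives the implied parallel efficiency.

```python
# Speed-up and parallel efficiency implied by the reported timings:
# 2.58 h single-threaded vs 3.3 min on 100 cloud nodes.
t_local = 2.58 * 3600        # seconds on the local computer
t_cloud = 3.3 * 60           # seconds on the cloud cluster
nodes = 100

speedup = t_local / t_cloud
efficiency = speedup / nodes
print(f"speed-up ~{speedup:.0f}x, parallel efficiency ~{efficiency:.0%}")
# ~47x and ~47%: the gap to 100% reflects per-node startup and
# communication overhead, which the authors report becomes
# negligible for large simulations.
```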
STS 51-L crewmembers during training session in flight deck simulation
NASA Technical Reports Server (NTRS)
1985-01-01
Shuttle mission simulator (SMS) scene of Astronauts Michael J. Smith, Ellison S. Onizuka, Judith A. Resnik, and Francis R. (Dick) Scobee in their launch and entry positions on the flight deck (46207); Left to right, Backup payload specialist Barbara R. Morgan, Teacher in Space Payload specialist Christa McAuliffe, Hughes Payload specialist Gregory B. Jarvis, and Mission Specialist Ronald E. McNair in the middeck portion of the Shuttle Mission Simulator at JSC (46208).
A Detailed FLUKA-2005 Monte Carlo Simulation for the ATIC Detector
NASA Technical Reports Server (NTRS)
Gunasingha, R. M.; Fazely, A. R.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T. G.
2006-01-01
We have performed a detailed Monte Carlo (MC) calculation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2005, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active Bismuth Germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification; as a particle tracking system, three projective layers of x-y scintillator hodoscopes are employed above, in the middle of, and below a 0.75 nuclear interaction length graphite target. Our calculations are part of an analysis package for both the A- and energy-dependence of different nuclei interacting with the ATIC detector. The MC simulates the responses of the different components of the detector, such as the Si matrix, the scintillator hodoscopes, and the BGO calorimeter, to various nuclei. We also show comparisons of the FLUKA-2005 MC calculations with a GEANT calculation and data for protons, He and CNO.
NASA Astrophysics Data System (ADS)
Ustinov, E. A.
2017-01-01
The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
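The Widom test-particle insertion that the kMC formulation avoids can be sketched compactly for hard spheres, where the Boltzmann factor of a ghost insertion is either 0 (overlap) or 1. This is a generic illustration of the reference method, not the authors' code.

```python
import numpy as np

def widom_excess_mu_hs(positions, box, sigma, n_insert=10000, rng=None):
    """Widom test-particle estimate of the excess chemical potential for
    hard spheres: exp(-beta*mu_ex) = <acceptance of a random ghost insertion>.
    Returns beta*mu_excess; positions has shape (N, 3), box is the cubic
    box length, sigma the hard-sphere diameter."""
    rng = rng or np.random.default_rng()
    accepted = 0
    for _ in range(n_insert):
        trial = rng.uniform(0.0, box, size=3)
        d = positions - trial
        d -= box * np.round(d / box)              # minimum-image convention
        if np.all(np.einsum('ij,ij->i', d, d) >= sigma**2):
            accepted += 1                          # no overlap: factor is 1
    p_ins = accepted / n_insert
    return -np.log(p_ins) if p_ins > 0 else np.inf

# Dilute example: insertions almost always succeed, so beta*mu_ex ~ 0.
# At high density p_ins collapses, which is exactly the accuracy problem
# the chemical-potential route described above is designed to avoid.
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(20, 3))
print(widom_excess_mu_hs(pos, box=10.0, sigma=1.0, rng=rng))
```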
Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach
NASA Astrophysics Data System (ADS)
Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne
We present a new approach to improve the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvement in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies are liable to allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
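The core idea, reweighting MC move frequencies in proportion to their measured past efficiency, can be sketched as below; the evolutionary algorithm with 'immortal' individuals and the specific sampling criteria are not reproduced, and the floor parameter is an assumption.

```python
import numpy as np

def reallocate_move_frequencies(freq, efficiency, floor=0.02):
    """Re-weight MC move frequencies in proportion to a measured
    sampling-efficiency score for each move type, keeping a small
    floor so no move is ever switched off entirely. Only the core
    reweighting idea; the paper couples it to an evolutionary
    algorithm running over parallel simulations."""
    freq = np.asarray(freq, float) * np.asarray(efficiency, float)
    freq = np.maximum(freq / freq.sum(), floor)
    return freq / freq.sum()

# Example: translation, rotation, reptation, volume moves
freq = [0.25, 0.25, 0.25, 0.25]
eff = [0.8, 0.1, 1.5, 0.4]    # efficiency scores measured over the last block
print(reallocate_move_frequencies(freq, eff))
```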
Proneth, Bettina; Pogozheva, Irina D; Portillo, Federico P; Mosberg, Henry I; Haskell-Luevano, Carrie
2008-09-25
The melanocortin-3 and -4 receptors (MC3R, MC4R) have been implicated in energy homeostasis and obesity. Whereas the physiological role of the MC4R is extensively studied, little is known about the MC3R. One caveat is the limited availability of ligands that are selective for the MC3R. Previous studies identified Ac-His-DPhe(p-I)-Arg-Trp-NH2, which possessed partial agonist/antagonist pharmacology at the mMC3R while retaining full nanomolar agonist pharmacology at the mMC4R. These data allowed for the hypothesis that the DPhe position in melanocortin tetrapeptides can be used to examine ligand side-chain determinants important for differentiation of mMC3R agonist versus antagonist activity. A series of 15 DPhe7-modified Ac-His-DPhe7-Arg-Trp-NH2 tetrapeptides has been synthesized and pharmacologically characterized. Most notable results include the identification of modifications that resulted in potent antagonists/partial agonists at the mMC3R and full, potent agonists at the mMC4R. These SAR studies provide experimental evidence that the molecular mechanism of antagonism at the mMC3R differentiates this subtype from the mMC4R.
Proneth, Bettina; Pogozheva, Irina D.; Portillo, Federico P.; Mosberg, Henry I.; Haskell-Luevano, Carrie
2010-01-01
The melanocortin-3 and -4 receptors (MC3R, MC4R) have been implicated in energy homeostasis and obesity. Whereas the physiological role of the MC4R is extensively studied, little is known about the MC3R. One caveat is the limited availability of ligands that are selective for the MC3R. Previous studies identified Ac-His-DPhe(p-I)-Arg-Trp-NH2, which possessed partial agonist/antagonist pharmacology at the mMC3R while retaining full nanomolar agonist pharmacology at the mMC4R. These data allowed for the hypothesis that the DPhe position in melanocortin tetrapeptides can be used to examine ligand side-chain determinants important for differentiation of mMC3R agonist versus antagonist activity. A series of 15 DPhe7 modified Ac-His-DPhe7-Arg-Trp-NH2 tetrapeptides has been synthesized and pharmacologically characterized. Most notable results include the identification of modifications that resulted in potent antagonists/partial agonists at the mMC3R and full, potent agonists at the mMC4R. These SAR studies provide experimental evidence that the molecular mechanism of antagonism at the mMC3R differentiates this subtype from the mMC4R. PMID:18800761
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sangroh; Yoo, Sua; Yin, Fangfang
2010-07-15
Purpose: To assess imaging dose of partial- and full-angle kilovoltage CBCT scan protocols and to evaluate image quality for each protocol. Methods: The authors obtained the CT dose index (CTDI) of the kilovoltage CBCT protocols in an on-board imager by ion chamber (IC) measurements and Monte Carlo (MC) simulations. A total of six new CBCT scan protocols were evaluated: standard-dose head (100 kVp, 151 mA s, partial-angle), low-dose head (100 kVp, 75 mA s, partial-angle), high-quality head (100 kVp, 754 mA s, partial-angle), pelvis (125 kVp, 706 mA s, full-angle), pelvis spotlight (125 kVp, 752 mA s, partial-angle), and low-dose thorax (110 kVp, 271 mA s, full-angle). Using the point dose method, CTDI values were calculated by (1) the conventional weighted CTDI (CTDI_w) calculation and (2) Bakalyar's method (CTDI_wb). The MC simulations were performed to obtain the CTDI_w and CTDI_wb, as well as values from (3) central slice averaging (CTDI_2D) and (4) volume averaging (CTDI_3D) techniques. The CTDI values of the new protocols were compared to those of the old protocols (full-angle CBCT protocols). Image quality of the new protocols was evaluated following the CBCT image quality assurance (QA) protocol [S. Yoo et al., "A quality assurance program for the on-board imager," Med. Phys. 33(11), 4431-4447 (2006)], testing Hounsfield unit (HU) linearity, spatial linearity/resolution, contrast resolution, and HU uniformity. Results: The CTDI_w values were 6.0, 3.2, 29.0, 25.4, 23.8, and 7.7 mGy for the new protocols, respectively. The CTDI_w and CTDI_wb differed within +3% between IC measurements and MC simulations. Method (2) results were within ±12% of method (1). In MC simulations, the CTDI_w and CTDI_wb were comparable to the CTDI_2D and CTDI_3D, with differences ranging from -4.3% to 20.6%. The CTDI_3D values were the smallest among all the CTDI values. The CTDI_w of the new protocols was found to be ~14 times lower for the standard head scan and 1.8 times lower for the standard body scan than for the old protocols. In the image quality QA tests, all the protocols except the low-dose head and low-dose thorax protocols were within tolerance in the HU verification test; the HU value for these two protocols was always higher than the nominal value. All the protocols passed the spatial linearity/resolution and HU uniformity tests. In the contrast resolution test, only the high-quality head and pelvis scan protocols were within tolerance. In addition, a crescent effect was found in the partial-angle scan protocols. Conclusions: The authors found that the CTDI_w of the new CBCT protocols has been significantly reduced compared to the old protocols, with acceptable image quality. The CTDI_w values from the point dose method were close to those from the volume averaging method within 9%-21% for all the CBCT scan protocols. Bakalyar's method produced more accurate dose estimation, within 14%. The HU inaccuracy from the low-dose head and low-dose thorax protocols can render incorrect dose results in the treatment planning system. When high soft-tissue contrast data are desired, the high-quality head or pelvis scan protocol is recommended depending on the imaging area. The point dose method can be applied to estimate CBCT dose with reasonable accuracy in the clinical environment.
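The conventional weighted CTDI referred to as method (1) combines center and periphery point doses with the standard 1/3-2/3 weighting; a minimal sketch follows (the example readings are assumed, chosen to land near the reported 6.0 mGy standard-dose head value).

```python
def ctdi_w(ctdi_center, ctdi_periphery):
    """Conventional weighted CTDI from point doses measured at the
    center and periphery of a standard CTDI phantom:
        CTDI_w = (1/3) * CTDI_center + (2/3) * CTDI_periphery
    """
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

# Assumed example point-dose readings (mGy)
print(ctdi_w(5.1, 6.5))   # ~6.0 mGy
```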
a Model to Simulate the Radiative Transfer of Fluorescence in a Leaf
NASA Astrophysics Data System (ADS)
Zhao, F.; Ni, Q.
2018-04-01
Light is reflected, transmitted, and absorbed by green leaves. Chlorophyll fluorescence (ChlF) is the signal emitted by chlorophyll molecules in a leaf after the absorption of light. ChlF can be used as a direct probe of the functional status of the photosynthetic machinery because of its close relationship with photosynthesis. The scattering, absorbing, and emitting properties of leaves are spectrally dependent, which can be simulated by modeling leaf-level fluorescence. In this paper, we propose a Monte Carlo (MC) model to simulate the radiative transfer of photons in the leaf. Results show that typical leaf fluorescence spectra can be properly simulated, with two peaks centered at around 685 nm in the red and 740 nm in the far-red regions. By analysing the sensitivity of the input parameters, we found that the MC model simulates their influence on the emitted fluorescence well. We also compared the results of the MC model with those of the Fluspect model; they generally agree well in the far-red region but deviate in the red region.
"First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).
Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert
2012-03-15
First principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO2(110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of the TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.
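A rejection-free kMC step of the kind underlying such simulations can be sketched in a few lines: pick an elementary event with probability proportional to its rate, then advance the clock by an exponential waiting time. The rates below are placeholders, not the DFT-derived barriers compared in the paper.

```python
import numpy as np

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo (BKL/Gillespie) step.

    rates: array of rate constants k_i = nu * exp(-E_a_i / kT) for all
    elementary events currently possible (adsorption, diffusion,
    CO + O recombination, ...). Returns (chosen event index, time step).
    """
    ktot = rates.sum()
    # pick event i with probability k_i / k_tot
    i = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, ktot))
    dt = -np.log(rng.uniform()) / ktot     # exponential waiting time
    return i, dt

rng = np.random.default_rng(3)
rates = np.array([2.0e3, 5.0e2, 1.0e1])   # assumed rates, s^-1
print(kmc_step(rates, rng))
```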
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time, and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm. PMID:26343660
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-08-27
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time, and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm.
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Stayman, J
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
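The kernel-smoothing step can be illustrated with a plain Gaussian kernel applied in the projection plane and across gantry angles; the bandwidths here are illustrative assumptions, not the optimized KS parameters reported above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_scatter(scatter_stack, sigma_uv=8.0, sigma_angle=1.5):
    """Kernel-smooth a noisy MC scatter estimate, both within the
    projection plane (u, v) and across gantry angles. scatter_stack
    has shape (n_angles, nv, nu); scatter is low-frequency, so broad
    kernels suppress MC noise with little loss of signal."""
    return gaussian_filter(scatter_stack, sigma=(sigma_angle, sigma_uv, sigma_uv))

# Synthetic noisy scatter estimates over 180 simulated angles
rng = np.random.default_rng(4)
noisy = rng.poisson(20.0, size=(180, 64, 64)).astype(float)
print(denoise_scatter(noisy).shape)
```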
Contrast of Backscattered Electron SEM Images of Nanoparticles on Substrates with Complex Structure
Kowoll, Thomas; Müller, Erich; Fritsch-Decker, Susanne; Hettler, Simon; Störmer, Heike; Weiss, Carsten; Gerthsen, Dagmar
2017-01-01
This study is concerned with backscattered electron scanning electron microscopy (BSE SEM) contrast of complex nanoscaled samples which consist of SiO2 nanoparticles (NPs) deposited on indium-tin-oxide covered bulk SiO2 and glassy carbon substrates. BSE SEM contrast of NPs is studied as a function of the primary electron energy and working distance. Contrast inversions are observed which prevent intuitive interpretation of NP contrast in terms of material contrast. Experimental data is quantitatively compared with Monte Carlo (MC) simulations. Quantitative agreement between experimental data and MC simulations is obtained if the transmission characteristics of the annular semiconductor detector are taken into account. MC simulations facilitate the understanding of NP contrast inversions and are helpful to derive conditions for optimum material and topography contrast. PMID:29109816
Contrast of Backscattered Electron SEM Images of Nanoparticles on Substrates with Complex Structure.
Kowoll, Thomas; Müller, Erich; Fritsch-Decker, Susanne; Hettler, Simon; Störmer, Heike; Weiss, Carsten; Gerthsen, Dagmar
2017-01-01
This study is concerned with backscattered electron scanning electron microscopy (BSE SEM) contrast of complex nanoscaled samples which consist of SiO2 nanoparticles (NPs) deposited on indium-tin-oxide covered bulk SiO2 and glassy carbon substrates. BSE SEM contrast of NPs is studied as a function of the primary electron energy and working distance. Contrast inversions are observed which prevent intuitive interpretation of NP contrast in terms of material contrast. Experimental data is quantitatively compared with Monte Carlo (MC) simulations. Quantitative agreement between experimental data and MC simulations is obtained if the transmission characteristics of the annular semiconductor detector are taken into account. MC simulations facilitate the understanding of NP contrast inversions and are helpful to derive conditions for optimum material and topography contrast.
NASA Astrophysics Data System (ADS)
Matsui, T.; Dolan, B.; Tao, W. K.; Rutledge, S. A.; Iguchi, T.; Barnum, J. I.; Lang, S. E.
2017-12-01
This study presents polarimetric radar characteristics of intense convective cores derived from observations as well as from a polarimetric-radar simulator applied to cloud-resolving model (CRM) simulations of the Midlatitude Continental Convective Clouds Experiment (MC3E) May 23 case over Oklahoma and the Tropical Warm Pool-International Cloud Experiment (TWP-ICE) Jan 23 case over Darwin, Australia, to highlight the contrast between continental and maritime convection. The POLArimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a state-of-the-art T-matrix/Mueller-matrix-based polarimetric radar simulator that can generate synthetic polarimetric radar signals (reflectivity, differential reflectivity, specific differential phase, co-polar correlation) as well as synthetic radar retrievals (precipitation, hydrometeor type, updraft velocity) through consistent treatment of cloud microphysics and dynamics from CRMs. The Weather Research and Forecasting (WRF) model is configured to simulate continental and maritime severe storms over the MC3E and TWP-ICE domains with the Goddard bulk 4ICE single-moment microphysics and the HUCM spectral-bin microphysics. Various statistical diagrams of polarimetric radar signals, hydrometeor types, updraft velocity, and precipitation intensity are examined for convective and stratiform precipitation regimes and directly compared between the MC3E and TWP-ICE cases. The results show that MC3E convection is characterized by very strong reflectivity (up to 60 dBZ), slightly negative differential reflectivity (-0.8 to 0 dB), and near-zero specific differential phase above the freezing level. TWP-ICE convection, on the other hand, shows strong reflectivity (up to 50 dBZ), slightly positive differential reflectivity (0 to 1.0 dB), and differential phase (0 to 0.8 dB/km). The Hydrometeor IDentification (HID) algorithm applied to the observations and simulations detects a hail-dominant convective core in MC3E and a graupel-dominant convective core in TWP-ICE. This land-ocean contrast agrees with previous studies of radar and radiometer signals from TRMM satellite climatology associated with warm-cloud depths and the vertical structure of buoyancy.
NASA Astrophysics Data System (ADS)
Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-08-01
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-07-17
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
Monte Carlo simulations to replace film dosimetry in IMRT verification.
Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig
2011-01-01
Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil beam based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of diode measurements and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3 mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference) with dose deviations up to 10%. Corresponding values were significantly reduced to 0.34 ± 0.09 for MC dose calculation. The total time needed for the two verification procedures is comparable; however, MC simulation is far less labor-intensive. The presented study showed that independent dose calculation verification of IMRT plans with a fast MC program has the potential to eclipse film dosimetry more and more in the near future. Thus, the linac-specific QA part will necessarily become more important. In combination with MC simulations and due to the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended at least in the IMRT introduction phase. Copyright © 2010. Published by Elsevier GmbH.
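The gamma evaluation used for the film comparison (3% dose difference, 3 mm distance-to-agreement) can be written as a brute-force search; below is a minimal, unoptimized global-gamma sketch on synthetic planar doses, not the clinical software used in the study.

```python
import numpy as np

def gamma_index(ref, test, spacing, dd=0.03, dta=3.0):
    """Brute-force global gamma for a 2D planar dose. dd is the
    fractional dose-difference criterion, dta the distance-to-agreement
    in mm, spacing the pixel pitch in mm. A point passes if gamma <= 1."""
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    gamma = np.full(ref.shape, np.inf)
    dmax = ref.max()                      # global dose normalization
    for j in range(ny):
        for i in range(nx):
            dist2 = ((ys - j)**2 + (xs - i)**2) * spacing**2
            keep = dist2 <= (3 * dta)**2  # limit the spatial search window
            dose2 = ((test[keep] - ref[j, i]) / (dd * dmax))**2
            gamma[j, i] = np.sqrt((dist2[keep] / dta**2 + dose2).min())
    return gamma

rng = np.random.default_rng(5)
ref = rng.uniform(0.5, 1.0, (40, 40))
test = ref * rng.normal(1.0, 0.01, ref.shape)
g = gamma_index(ref, test, spacing=1.0)
print((g <= 1).mean())                    # fraction of passing pixels
```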
NASA Astrophysics Data System (ADS)
Sud, Y. C.; Walker, G. K.
1999-09-01
A prognostic cloud scheme named McRAS (Microphysics of Clouds with Relaxed Arakawa-Schubert Scheme) has been designed and developed with the aim of improving moist processes, microphysics of clouds, and cloud-radiation interactions in GCMs. McRAS distinguishes three types of clouds: convective, stratiform, and boundary layer. The convective clouds transform and merge into stratiform clouds on an hourly timescale, while the boundary layer clouds merge into the stratiform clouds instantly. The cloud condensate converts into precipitation following the autoconversion equations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically with fully interactive cloud microphysics throughout the life cycle of the cloud, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single-column model (SCM) with the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE) Phase III data has shown that, together with the rest of the model physics, McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. The time history and time mean of the in-cloud water and ice distribution, fractional cloudiness, cloud optical thickness, origin of precipitation in the convective anvils and towers, and the convective updraft and downdraft velocities and mass fluxes all show realistic behavior, although some of these diagnostics are not verifiable with the data on hand. The SCM sensitivity tests show that (i) without clouds the simulated GATE-SCM atmosphere is cooler than observed; (ii) the model's convective scheme, RAS, is an important subparameterization of McRAS; and (iii) advection of cloud water substance is helpful in simulating better cloud distribution and cloud-radiation interaction. An evaluation of the performance of McRAS in the Goddard Earth Observing System II GCM is given in a companion paper (Part II).
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320
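For a single two-state gate, the diffusion approximation discussed above reduces to a scalar SDE that can be integrated with Euler-Maruyama; below is a minimal single-gate sketch (the paper's contribution, the generalization to arbitrary coupled multi-state kinetic schemes, is not reproduced here).

```python
import numpy as np

def simulate_gate_fraction(alpha, beta, N, T=100.0, dt=0.01, rng=None):
    """Euler-Maruyama integration of the diffusion approximation for the
    open fraction x of N two-state channels with opening rate alpha and
    closing rate beta (1/ms):
        dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x) + beta*x)/N) dW
    The noise term shrinks as 1/sqrt(N), i.e. channel noise vanishes
    in the many-channel limit."""
    rng = rng or np.random.default_rng()
    steps = int(T / dt)
    x = np.empty(steps)
    x[0] = alpha / (alpha + beta)        # start at the steady state
    for k in range(steps - 1):
        drift = alpha * (1.0 - x[k]) - beta * x[k]
        diff = np.sqrt(max(alpha * (1.0 - x[k]) + beta * x[k], 0.0) / N)
        x[k+1] = np.clip(x[k] + drift*dt + diff*np.sqrt(dt)*rng.normal(),
                         0.0, 1.0)
    return x

x = simulate_gate_fraction(alpha=0.1, beta=0.125, N=1000)
print(x.mean(), x.std())   # fluctuations around the steady-state fraction
```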
Impact gages for detecting meteoroid and other orbital debris impacts on space vehicles.
NASA Technical Reports Server (NTRS)
Mastandrea, J. R.; Scherb, M. V.
1973-01-01
Impacts on space vehicles have been simulated using the McDonnell Douglas Aerophysics Laboratory (MDAL) Light-Gas Guns to launch particles at hypervelocity speeds into scaled space structures. Using impact gages and a triangulation technique, these impacts have been detected and accurately located. This paper describes in detail the various types of impact gages (piezoelectric PZT-5A, quartz, electret, and off-the-shelf plastics) used. This description includes gage design and experimental results for gages installed on single-walled scaled payload carriers, multiple-walled satellites and space stations, and single-walled full-scale Delta tank structures. A brief description of the triangulation technique, the impact simulation, and the data acquisition system are also included.
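The triangulation idea, locating the impact from wave arrival times at several gages under an assumed uniform wave speed, can be sketched as a small least-squares problem; the gage layout and wave speed below are made-up values, not the MDAL test configuration.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_impact(gage_xy, arrival_times, wave_speed):
    """Locate an impact on a panel from the arrival times of the
    structural wave at three or more gages, assuming a known, uniform
    wave speed. Solves for (x, y, t0) by least squares; an illustrative
    version of the triangulation idea, not the paper's procedure."""
    def residuals(p):
        x, y, t0 = p
        dist = np.hypot(gage_xy[:, 0] - x, gage_xy[:, 1] - y)
        return arrival_times - (t0 + dist / wave_speed)
    x0 = np.append(gage_xy.mean(axis=0), arrival_times.min())
    return least_squares(residuals, x0).x

# Synthetic check: impact at (0.3, 0.7) m, wave speed 5000 m/s
gages = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_xy = np.array([0.3, 0.7])
t = np.hypot(*(gages - true_xy).T) / 5000.0
print(locate_impact(gages, t, 5000.0))   # ~[0.3, 0.7, 0.0]
```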
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur
2012-01-01
One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on the CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall, Q1, and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems over their life cycles. The large-scale forcing derived from MC3E will be used to drive GCE model simulations, and the model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, a regional-scale model at very high resolution, the NASA Unified WRF, was also used for real-time forecasting during the MC3E campaign to ensure that precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to the initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall using the GCE model and NU-WRF.
Astronaut William S. McArthur in training for contingency EVA in WETF
NASA Technical Reports Server (NTRS)
1993-01-01
Astronaut William S. McArthur, mission specialist, participates in training for contingency extravehicular activity (EVA) for the STS-58 mission. He is wearing the extravehicular mobility unit (EMU) minus his helmet. For simulation purposes, McArthur was about to be submerged to a point of neutral buoyancy in the JSC Weightless Environment Training Facility (WETF).
Poster — Thur Eve — 47: Monte Carlo Simulation of Scp, Sc and Sp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.
The in-water output ratio (Scp), in-air output ratio (Sc), and phantom scatter factor (Sp) are important parameters for radiotherapy dose calculation. Experimentally, Scp is obtained by measuring the dose rate ratio in a water phantom, and Sc by measuring the water kerma rate ratio in air; there is no method that allows direct measurement of Sp. The Monte Carlo (MC) method has been used in the literature to simulate Scp and Sc in setups similar to the experimental ones, but to the best of our knowledge no direct MC simulation of Sp is available yet. We propose in this report a method of performing direct MC simulation of Sp. Starting from the definition, we derived that the Sp of a clinical photon beam can be approximated by the ratio of the dose rates contributed by the primary beam for a given field size and for the reference field size. Since only the primary beam is used, any linac head scattering should be excluded from the simulation, which can be realized by using the incident electron as a scoring parameter for MU. We performed MC simulations for Scp, Sc, and Sp. The simulated Scp matches well with golden beam data, and the Sp obtained by the proposed method agrees well with that obtained using the traditional relation Sp = Scp/Sc. Since the smaller the field size, the more the primary beam dominates, our Sp simulation method is accurate for small fields. By analyzing the calculated data, we found that the method can also be used without problems for large fields; the difference it introduces is clinically insignificant.
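The traditional route against which the direct simulation is checked is the one-line relation Sp = Scp/Sc; the example values below are assumed, for illustration only.

```python
def sp_traditional(scp, sc):
    """Traditional phantom scatter factor: Sp = Scp / Sc."""
    return scp / sc

# Assumed example values for a small field relative to the reference field
scp, sc = 0.942, 0.961
print(sp_traditional(scp, sc))   # ~0.980, to be compared against the
                                 # direct primary-beam MC simulation of Sp
```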
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplifications involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
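For readers unfamiliar with the Feynman-Kac connection invoked above, the sketch below estimates a "runaway" (exit) probability for a toy 1D SDE by direct forward MC sampling of the probabilistic representation. This is the brute-force baseline the backward BSDE solver outperforms, not the authors' algorithm; the drift, noise level, and threshold are arbitrary assumptions:

```python
import numpy as np

# Toy 1D SDE: dX = b(X) dt + sigma dW.  The "runaway" probability
# u(x0, t0) = P(X_T >= x_c | X_{t0} = x0) has the Feynman-Kac form
# u(x0, t0) = E[ 1_{X_T >= x_c} ], estimated here by forward path sampling.
rng = np.random.default_rng(0)

def runaway_probability(x0, t0, T, x_c, n_paths=100_000, dt=1e-3, sigma=0.5):
    n_steps = int((T - t0) / dt)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        drift = -x * (1.0 - x**2)   # arbitrary bistable drift (placeholder)
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(x >= x_c)        # fraction of paths that "ran away"

print(runaway_probability(x0=0.2, t0=0.0, T=1.0, x_c=1.0))
```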
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and once the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric, with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35-93 s and 114-122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%-50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
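A minimal sketch of the two ingredients named above — fitting a noisy MC scatter profile with a frequency-limited Fourier series via a low-pass filtered FFT, and checking convergence with Pearson's r — is given below. The profile shape, noise level, and harmonic cutoff are invented placeholders, not values from the paper:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Noisy stand-in for a 1D MC scatter detector response S_MC (placeholder shape).
x = np.linspace(0, 2 * np.pi, 256)
s_true = 1.0 + 0.5 * np.cos(x) + 0.2 * np.sin(2 * x)
s_mc = s_true + 0.1 * rng.standard_normal(x.size)

# Low-pass filtered FFT: keep only the lowest few harmonics -> smooth fit S_F.
cutoff = 4                          # number of retained harmonics (assumed)
spec = np.fft.rfft(s_mc)
spec[cutoff + 1:] = 0.0
s_fit = np.fft.irfft(spec, n=x.size)

# Goodness of fit: Pearson correlation between the fit and the raw MC estimate;
# in the CMCF scheme the MC runs stop once this exceeds a chosen GOF value.
r, _ = pearsonr(s_fit, s_mc)
print(f"Pearson r = {r:.3f}")
```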
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissue using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Oxidation of a new Biogenic VOC: Chamber Studies of the Atmospheric Chemistry of Methyl Chavicol
NASA Astrophysics Data System (ADS)
Bloss, William; Alam, Mohammed; Adbul Raheem, Modinah; Rickard, Andrew; Hamilton, Jacqui; Pereira, Kelly; Camredon, Marie; Munoz, Amalia; Vazquez, Monica; Vera, Teresa; Rodenas, Mila
2013-04-01
The oxidation of volatile organic compounds (VOCs) leads to the formation of ozone and secondary organic aerosol (SOA), with consequences for air quality, health, crop yields, atmospheric chemistry and radiative transfer. Recent observations have identified methyl chavicol ("MC"; estragole; 1-allyl-4-methoxybenzene, C10H12O) as a major BVOC above pine forests in the USA and oil palm plantations in Malaysian Borneo. Palm oil cultivation, and hence MC emissions, may be expected to increase with societal food and biofuel demand. We present the results of a series of simulation chamber experiments to assess the atmospheric fate of MC. Experiments were performed in the EUPHORE facility, monitoring stable product species, radical intermediates, and aerosol production and composition. We determine rate constants for the reaction of MC with OH and O3, and ozonolysis radical yields. Stable product measurements (FTIR, PTR-MS, GC-SPME) are used to determine the yields of stable products formed from OH- and O3-initiated oxidation, and to develop an understanding of the initial stages of the MC degradation chemistry. A surrogate mechanism approach is used to simulate MC degradation within the MCM (Master Chemical Mechanism), evaluated in terms of the ozone production measured in the chamber experiments, and applied to quantify the role of MC in the real atmosphere.
Using McStas for modelling complex optics, using simple building bricks
NASA Astrophysics Data System (ADS)
Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim
2011-04-01
The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for the design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow are intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow "components inside components", or meta-components, allowing the functionality of several simple components to be combined to achieve more complex behaviour, i.e. four single mirror plates together defining a guide. We show here that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples, including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.
NASA Astrophysics Data System (ADS)
Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.
2016-12-01
One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for short future periods (e.g., 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow with a SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and therefore is less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The implications of these limitations of both stochastic models for future drought and flood risk assessment will be discussed.
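Both models above share a two-part structure: a first-order Markov chain for wet/dry occurrence and a distribution for wet-day depths. A minimal sketch of that shared core, with Gamma-distributed depths; the transition probabilities and Gamma parameters are arbitrary placeholders, not calibrated values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# First-order Markov chain occurrence + Gamma wet-day depths (assumed values).
p_ww, p_dd = 0.7, 0.85            # P(wet|wet), P(dry|dry)
gamma_mean, gamma_sd = 8.0, 10.0  # wet-day depth mean/sd in mm

shape = (gamma_mean / gamma_sd) ** 2   # Gamma shape from mean and sd
scale = gamma_sd**2 / gamma_mean       # Gamma scale from mean and sd

def simulate_daily_rainfall(n_days, wet=False):
    depths = np.zeros(n_days)
    for day in range(n_days):
        p_wet = p_ww if wet else 1.0 - p_dd   # chain transition
        wet = rng.random() < p_wet
        if wet:
            depths[day] = rng.gamma(shape, scale)
    return depths

series = simulate_daily_rainfall(365 * 20)
print(f"mean annual rainfall: {series.sum() / 20:.0f} mm")
```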
A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.
2005-01-01
This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, adaptive and state space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor can achieve better compensation of transport delay than the McFarland predictor.
Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems
Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...
2017-11-07
We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
Evaluation of PeneloPET Simulations of Biograph PET/CT Scanners
NASA Astrophysics Data System (ADS)
Abushab, K. M.; Herraiz, J. L.; Vicente, E.; Cal-González, J.; España, S.; Vaquero, J. J.; Jakoby, B. W.; Udías, J. M.
2016-06-01
Monte Carlo (MC) simulations are widely used in positron emission tomography (PET) for optimizing detector design and acquisition protocols and for evaluating corrections and reconstruction methods. PeneloPET is an MC code for PET simulations, based on PENELOPE, which considers detector geometry, acquisition electronics, materials, and source definitions. While PeneloPET has been successfully employed and validated with small-animal PET scanners, it required a proper validation with clinical PET scanners including time-of-flight (TOF) information. For this purpose, we chose the family of Biograph PET/CT scanners: the Biograph True-Point (B-TP), the Biograph True-Point with TrueV (B-TPTV) and the Biograph mCT. They have similar block detectors and electronics, but differ in the number of rings and in configuration. Some effective parameters of the simulations, such as the dead time and the size of the reflectors in the detectors, were adjusted to reproduce the sensitivity and noise equivalent count (NEC) rate of the B-TPTV scanner. These parameters were then used to predict experimental results such as sensitivity, NEC rate, spatial resolution, and scatter fraction (SF) for all the Biograph scanners and some variations of them (energy windows and additional rings of detectors). Predictions agree with the measured values for the three scanners, within 7% (sensitivity and NEC rate) and 5% (SF). The resolution obtained for the B-TPTV is slightly better (10%) than the experimental values. In conclusion, we have shown that PeneloPET is suitable for simulating and investigating clinical systems with good accuracy and short computational time, though some effort tuning a few parameters of the modeled scanners may be needed in cases where the full details of the scanners under study are not available.
Game of Life on the Equal Degree Random Lattice
NASA Astrophysics Data System (ADS)
Shao, Zhi-Gang; Chen, Tao
2010-12-01
An effective matrix method is used to build the equal degree random (EDR) lattice, and a cellular automaton game of life on the EDR lattice is then studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied, giving a density of live cells ρ = 0.37017, which is consistent with the result ρ = 0.37 ± 0.003 from the MC simulation.
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from the SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable in shielding calculations.
Influence of photon energy cuts on PET Monte Carlo simulation results.
Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan
2012-07-01
The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and the GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences, as well as on the simulation time and noise-equivalent count rate, is evaluated by comparing the results for higher cuts to those for a 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC-generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of photon cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi K x-rays must be simulated in order to obtain unbiased estimates of the number of singles, true, scattered, and random coincidences, as well as of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton-scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and the similarity measures used for their comparison is statistically significant only for very high cuts (e.g., 350 keV). In summary, the simulation of the transport of characteristic x rays plays an important role if accurate modeling of a PET scanner system is to be achieved, and the simulation time decreases only slowly as the photon cut is increased, which, combined with the accuracy loss at high cuts, means that high photon energy cuts are not recommended as a means of accelerating MC simulations.
Thomson, R; Kawrakow, I
2012-06-01
Widely used classical trajectory Monte Carlo simulations of low-energy electron transport neglect the quantum nature of electrons; however, at sub-1 keV energies quantum effects have the potential to become significant. This work compares quantum and classical simulations within a simplified model of electron transport in water. Electron transport is modeled in water droplets using quantum mechanical (QM) and classical trajectory Monte Carlo (MC) methods. Water droplets are modeled as collections of point scatterers representing water molecules, from which electrons may be isotropically scattered. The role of inelastic scattering is investigated by introducing absorption. QM calculations involve numerically solving a system of coupled equations for the electron wavefield incident on each scatterer. A minimum distance between scatterers is introduced to approximate structured water. The average QM water droplet incoherent cross section is compared with the MC cross section, and a relative error (RE) on the MC results is computed. RE varies with electron energy, average and minimum distances between scatterers, and scattering amplitude. The mean free path is generally the relevant length scale for estimating RE. The introduction of a minimum distance between scatterers increases RE substantially (by factors of 5 to 10), suggesting that the structure of water must be modeled for accurate simulations. Inelastic scattering does not improve agreement between QM and MC simulations: for the same magnitude of elastic scattering, the introduction of inelastic scattering increases RE. Droplet cross sections are sensitive to droplet size and shape; considerable variations in RE are observed with changing droplet size and shape. At sub-1 keV energies, quantum effects may become non-negligible for electron transport in condensed media. Electron transport is strongly affected by the structure of the medium, and inelastic scattering does not improve agreement between QM and MC simulations of low-energy electron transport in condensed media.
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.
Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A
2005-07-21
Monte Carlo (MC) simulations have traditionally been used for single-field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple-field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm^2 field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10^13 ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2% in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
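The calibration chain described above reduces to simple arithmetic: once the simulation establishes how many incident electrons correspond to 1 MU, any MC dose scored per incident particle converts directly to absolute dose per MU. A sketch using the paper's quoted electrons-per-MU figure; the dose-per-particle value is a made-up placeholder:

```python
# Calibration from the paper: 1 MU corresponds to 8.129e13 electrons
# incident on the target for a 10 x 10 cm^2, 6 MV field (~1% uncertainty).
electrons_per_mu = 8.129e13

# MC-scored dose at a point of interest, per incident electron (placeholder).
dose_per_particle_gy = 1.23e-16

# Absolute dose delivered by a 100 MU beam at that point.
mu = 100
dose_gy = dose_per_particle_gy * electrons_per_mu * mu
print(f"absolute dose: {dose_gy:.3f} Gy")  # ~1 cGy/MU with these numbers
```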
Dosimetry applications in GATE Monte Carlo toolkit.
Papadimitroulas, Panagiotis
2017-09-01
Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications for simulated diagnostic and therapeutic protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications, such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radioimmunotherapy, radiotherapy and brachytherapy. GATE has been used efficiently in several applications, such as dose point kernels, S-values and brachytherapy parameters, and has been compared against various MC codes that have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and for simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment.
NASA Astrophysics Data System (ADS)
Underwood, T. S. A.; Sung, W.; McFadden, C. H.; McMahon, S. J.; Hall, D. C.; McNamara, A. L.; Paganetti, H.; Sawakuchi, G. O.; Schuemann, J.
2017-04-01
Whilst Monte Carlo (MC) simulations of proton energy deposition have been well-validated at the macroscopic level, their microscopic validation remains lacking. Equally, no gold standard yet exists for experimental metrology of individual proton tracks. In this work we compare the distributions of stochastic proton interactions simulated using the TOPAS-nBio MC platform against confocal microscope data for Al2O3:C,Mg fluorescent nuclear track detectors (FNTDs). We irradiated 8 × 4 × 0.5 mm^3 FNTD chips inside a water phantom, positioned at seven positions along a pristine proton Bragg peak with a range in water of 12 cm. MC simulations were implemented in two stages: (1) using TOPAS to model the beam properties within a water phantom and (2) using TOPAS-nBio with Geant4-DNA physics to score particle interactions through a water surrogate of Al2O3:C,Mg. The measured median track integrated brightness (IB) was observed to be strongly correlated with both (i) the voxelized track-averaged linear energy transfer (LET) and (ii) the frequency-mean microdosimetric lineal energy, $\overline{y}_F$, both simulated in pure water. Histograms of FNTD track IB were compared against TOPAS-nBio histograms of the number of terminal electrons per proton, scored in water with mass density scaled to mimic Al2O3:C,Mg. Trends between exposure depths observed in TOPAS-nBio simulations were experimentally replicated in the study of FNTD track IB. Our results represent an important first step towards the experimental validation of MC simulations on the sub-cellular scale and suggest that FNTDs can enable experimental study of the microdosimetric properties of individual proton tracks.
A Comparison of Experimental EPMA Data and Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2004-01-01
Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements obtained by electron-probe microanalysis (EPMA) and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been used to develop φ(ρz) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. Additional information is contained in the extended abstract.
Measuring Virtual Simulations Value in Training Exercises - USMC Use Case
2015-12-04
NASA Astrophysics Data System (ADS)
Lépinoux, J.; Sigli, C.
2018-01-01
In a recent paper, the authors showed how the cluster free energies are constrained by the coagulation probability, and explained various anomalies observed during precipitation kinetics in concentrated alloys. This coagulation probability proved too complex a function to be predicted accurately knowing only the cluster distribution in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates and then to switch to CD, which is mean-field but, unlike MC, not limited by a box size. In this paper, we explain how to take into account the information available before the quasi-equilibrium state in order to establish guidelines for safely predicting the cluster free energies.
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly III, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full-core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full-core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Coarse Mesh Finite Difference (CMFD) acceleration has been implemented within MC21, resulting in a significant reduction of the inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
NASA Astrophysics Data System (ADS)
Katsoulakis, Markos A.; Vlachos, Dionisios G.
2003-11-01
We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes, as approximations at larger length scales, for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to that of the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous-time microscopic MC and CGMC simulations are compared under far-from-equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by a factor of q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous-time MC simulations, varying from q^3 for short-ranged potentials to q^4 for long-ranged potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
Monte Carlo decision curve analysis using aggregate data.
Hozo, Iztok; Tsalatsanis, Athanasios; Djulbegovic, Benjamin
2017-02-01
Decision curve analysis (DCA) is an increasingly used method for evaluating diagnostic tests and predictive models, but its application requires individual patient data. The Monte Carlo (MC) method can be used to simulate the probabilities and outcomes of individual patients and offers an attractive option for the application of DCA. We constructed an MC decision model to simulate individual probabilities of the outcomes of interest. These probabilities were contrasted against the threshold probability at which a decision-maker is indifferent between the key management strategies: treat all, treat none, or use the predictive model to guide treatment. We compared the results of DCA with MC-simulated data against the results of DCA based on actual individual patient data for three decision models published in the literature: (i) statins for primary prevention of cardiovascular disease, (ii) hospice referral for terminally ill patients and (iii) prostate cancer surgery. The results of MC DCA and patient-data DCA were identical. To the extent that patient-data DCA was used to inform decisions about statin use, referral to hospice or prostate surgery, the results indicate that MC DCA could have also been used. As long as the aggregate parameters on the distribution of the probability of outcomes and treatment effects are accurately described in published reports, MC DCA will generate results indistinguishable from those of individual-patient-data DCA. We provide a simple, easy-to-use model, which can facilitate wider use of DCA and better evaluation of diagnostic tests and predictive models that rely only on aggregate data reported in the literature.
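DCA boils down to computing, at each threshold probability p_t, the net benefit NB = TP/n − (FP/n) · p_t/(1 − p_t) for each strategy. The sketch below does this for MC-simulated patients; the risk distribution and outcome mechanism are invented placeholders, not the decision models from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# MC-simulated individual risks and outcomes (placeholder mechanism).
p_model = rng.beta(2, 5, size=n)        # model-predicted risk per patient
events = rng.random(n) < p_model        # simulated true outcomes

def net_benefit(treat, events, p_t, n):
    tp = np.sum(treat & events) / n     # true-positive rate
    fp = np.sum(treat & ~events) / n    # false-positive rate
    return tp - fp * p_t / (1.0 - p_t)  # threshold-weighted net benefit

for p_t in (0.1, 0.2, 0.3):
    nb_model = net_benefit(p_model >= p_t, events, p_t, n)
    nb_all = net_benefit(np.ones(n, dtype=bool), events, p_t, n)
    print(f"p_t={p_t:.1f}: model {nb_model:.3f}, "
          f"treat-all {nb_all:.3f}, treat-none 0.000")
```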
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hee Jung; Department of Biomedical Engineering, Seoul National University, Seoul; Department of Radiation Oncology, Soonchunhyang University Hospital, Seoul
2015-01-01
To investigate how accurately treatment planning systems (TPSs) account for the tongue-and-groove (TG) effect, Monte Carlo (MC) simulations and radiochromic film (RCF) measurements were performed for comparison with TPS results. Two commercial TPSs computed the TG effect for the Varian Millennium 120 multileaf collimator (MLC). The TG effect on the off-axis dose profile at 3 depths of solid water was estimated as the maximum depth and the full width at half maximum (FWHM) of the dose dip at an interleaf position. When compared with the off-axis dose of the open field, the maximum depth of the dose dip for MC and RCF ranged from 10.1% to 20.6%; the maximum depth of the dose dip gradually decreased by up to 8.7% with increasing depth from 1.5 to 10 cm and also by up to 4.1% with increasing off-axis distance from 0 to 13 cm. However, TPS results showed at most a 2.7% decrease for the same depth range and a negligible variation over the same off-axis distances. The FWHM of the dose dip was approximately 0.19 cm for MC and 0.17 cm for RCF, but 0.30 cm for the Eclipse TPS and 0.45 cm for the Pinnacle TPS. Accordingly, the integrated value of the TG dose dip for the TPSs was larger than that for MC and RCF, and almost invariant with depth and off-axis distance. We concluded that the TG dependence on depth and off-axis dose shown in the MC and RCF results could not be appropriately modeled by the TPS versions in this study.
The Flash ADC system and PMT waveform reconstruction for the Daya Bay experiment
NASA Astrophysics Data System (ADS)
Huang, Yongbo; Chang, Jinfan; Cheng, Yaping; Chen, Zhang; Hu, Jun; Ji, Xiaolu; Li, Fei; Li, Jin; Li, Qiuju; Qian, Xin; Jetter, Soeren; Wang, Wei; Wang, Zheng; Xu, Yu; Yu, Zeyuan
2018-07-01
To better understand the energy response of the Antineutrino Detector (AD), the Daya Bay Reactor Neutrino Experiment installed a full Flash ADC readout system on one AD, allowing simultaneous data taking with the current readout system. This paper presents the design, data acquisition, and simulation of the Flash ADC system, and focuses on the PMT waveform reconstruction algorithms. For liquid scintillator calorimetry, the most critical requirement for waveform reconstruction is linearity. Several common reconstruction methods were tested, but their linearity performance was not satisfactory. A new method based on the deconvolution technique was developed, with 1% residual non-linearity, which fulfills the requirement. The performance was validated with both data and Monte Carlo (MC) simulations, and 1% consistency between them was achieved.
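As an illustration of the deconvolution idea (not the Daya Bay algorithm itself), the sketch below recovers pulse charges by deconvolving a waveform with a known single-photoelectron response template in the frequency domain, with a simple regularization to tame noise amplification. The pulse shape, positions, and noise level are assumed placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512
t = np.arange(n)

# Single-photoelectron response template (placeholder double-exponential).
template = np.exp(-t / 20.0) - np.exp(-t / 4.0)
template /= template.sum()

# Synthetic waveform: two pulses of charge 1.0 and 0.5, plus white noise.
charge = np.zeros(n)
charge[100], charge[260] = 1.0, 0.5
waveform = np.convolve(charge, template)[:n] + 0.002 * rng.standard_normal(n)

# Frequency-domain deconvolution with Tikhonov-style regularization.
H = np.fft.rfft(template)
W = np.fft.rfft(waveform)
eps = 1e-3   # regularization strength (assumed)
recovered = np.fft.irfft(W * np.conj(H) / (np.abs(H) ** 2 + eps), n=n)

# Summing around each pulse gives roughly the injected charges (1.0 and 0.5).
print(recovered[95:105].sum(), recovered[255:265].sum())
```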
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of the scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error in a region of interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed; the whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
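Step 4 of the workflow above — interpolating scatter estimated at sparse view angles to every projection angle — can be sketched in a few lines. Here a periodic linear interpolation along the gantry angle is assumed for illustration; the angle and pixel counts are placeholders:

```python
import numpy as np

n_proj, n_sparse, n_pix = 360, 31, 128   # assumed counts
sparse_angles = np.linspace(0, 360, n_sparse, endpoint=False)
all_angles = np.linspace(0, 360, n_proj, endpoint=False)

# Stand-in for MC scatter estimated at the sparse angles: (n_sparse, n_pix).
scatter_sparse = np.random.default_rng(5).random((n_sparse, n_pix))

# Periodic linear interpolation along the angle axis, pixel by pixel.
scatter_all = np.empty((n_proj, n_pix))
for pix in range(n_pix):
    scatter_all[:, pix] = np.interp(
        all_angles, sparse_angles, scatter_sparse[:, pix], period=360.0
    )
print(scatter_all.shape)  # (360, 128): one scatter row per projection angle
```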
Bhaskaran, Abhishek; Barry, M A Tony; Al Raisi, Sara I; Chik, William; Nguyen, Doan Trang; Pouliopoulos, Jim; Nalliah, Chrishan; Hendricks, Roger; Thomas, Stuart; McEwan, Alistair L; Kovoor, Pramesh; Thiagalingam, Aravinda
2015-10-01
Magnetic navigation system (MNS) ablation was suspected to be less effective and less stable in highly mobile cardiac regions than radiofrequency (RF) ablation with manual control (MC). The aim of the study was to compare (1) the lesion size and (2) the stability of MNS versus MC during irrigated RF ablation, with and without simulated mechanical heart wall motion. In a previously validated myocardial phantom, the performance of a Navistar RMT Thermocool catheter (Biosense Webster, CA, USA) guided by the MNS was compared to a manually controlled Navistar irrigated Thermocool catheter (Biosense Webster, CA, USA). The lesion dimensions were compared with the catheter in inferior and superior orientations, with and without 6-mm simulated wall motion. All ablations were performed with 40 W power and 30 ml/min irrigation for 60 s. A total of 60 ablations were performed. The mean lesion volumes with MNS and MC were 57.5 ± 7.1 and 58.1 ± 7.1 mm^3, respectively, in the inferior catheter orientation (n = 23, p = 0.6), and 62.8 ± 9.9 and 64.6 ± 7.6 mm^3, respectively, in the superior catheter orientation (n = 16, p = 0.9). With 6-mm simulated wall motion, the mean lesion volumes with MNS and MC were 60.2 ± 2.7 and 42.8 ± 8.4 mm^3, respectively, in the inferior catheter orientation (n = 11, p < 0.01), and 74.1 ± 5.8 and 54.2 ± 3.7 mm^3, respectively, in the superior catheter orientation (n = 10, p < 0.01). During 6-mm simulated wall motion, the MC catheter and the MNS catheter moved 5.2 ± 0.1 and 0 mm, respectively, in the inferior orientation and 5.5 ± 0.1 and 0 mm, respectively, in the superior orientation on the ablation surface. The lesion dimensions were larger with MNS than with MC in the presence of simulated wall motion, consistent with greater catheter stability. However, similar lesion dimensions were observed in the stationary model.
Wan Chan Tseung, H; Ma, J; Beltran, C
2015-06-01
Very fast Monte Carlo (MC) simulations of proton transport have recently been implemented on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations for proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, and (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is ∼20 s for 1 × 10^7 proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam-based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.
A novel Monte Carlo algorithm for simulating crystals with McStas
NASA Astrophysics Data System (ADS)
Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.
2004-07-01
We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; it can therefore also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aimed at characterizing crystals, such as diffraction topographs.
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model within a traditional two-part MC-Gamma distribution modelling structure, but with a new parameterization technique. We used two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of a Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multivariate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sampled these two Gamma parameters from the multivariate normal distribution for each month of each year and used them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over pre-specified past periods (e.g., the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability on daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we have used ground-based data from 15 rain gauge stations around Australia, which cover a wide range of climate zones, including coastal, monsoonal, and arid characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. The relative merits of both models for hydrological simulation will be discussed further in the presentation.
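The distinctive step in the hierarchical parameterization above is sampling the monthly Gamma parameters themselves from a fitted multivariate normal, rather than holding them fixed. A minimal sketch, assuming illustrative per-month statistics rather than values fitted to the RCM data:

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed between-year statistics of (mean, sd) of wet-day rainfall for one
# month: treated jointly as multivariate normal to preserve their correlation.
mu = np.array([8.0, 10.0])            # mean depth, sd of depth (mm) -- assumed
cov = np.array([[4.0, 3.0],
                [3.0, 6.0]])          # between-year covariance -- assumed

n_years = 30
for year in range(n_years):
    g_mean, g_sd = rng.multivariate_normal(mu, cov)
    g_mean, g_sd = max(g_mean, 0.1), max(g_sd, 0.1)  # guard against negatives
    shape = (g_mean / g_sd) ** 2
    scale = g_sd**2 / g_mean
    wet_depths = rng.gamma(shape, scale, size=12)    # e.g. 12 wet days

# Re-drawing the parameters each year injects the multi-year variability that
# fixed (deterministic) Gamma parameters were found to underestimate.
```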
John Glenn during preflight training for STS-95
1998-04-14
S98-06946 (28 April 1998) --- U.S. Sen. John H. Glenn Jr. (D.-Ohio) uses a device called a Sky Genie to simulate rappelling from a troubled Space Shuttle during training at the Johnson Space Center (JSC). This training mockup is called the Full Fuselage Trainer (FFT). Glenn has been named as a payload specialist for STS-95, scheduled for launch later this year. This exercise, in the systems integration facility at JSC, trains the crew members in procedures to follow when egressing a troubled shuttle on the ground. Photo Credit: Joe McNally, National Geographic, for NASA
Numerical Simulation on a Possible Formation Mechanism of Interplanetary Magnetic Cloud Boundaries
NASA Astrophysics Data System (ADS)
Fan, Quan-Lin; Wei, Feng-Si; Feng, Xue-Shang
2003-08-01
The formation mechanism of interplanetary magnetic cloud (MC) boundaries is numerically investigated by simulating the interaction between an MC with some initial momentum and a local interplanetary current sheet. The compressible 2.5D MHD equations are solved. Results show that magnetic reconnection is a possible formation mechanism when an MC interacts with a surrounding current sheet. A number of interesting features are found. For instance, the front boundary of the MC is a magnetic reconnection boundary that could be caused by driven reconnection ahead of the cloud, and the tail boundary might be caused by the driving of the entrained flow as a result of the Bernoulli principle. Analysis of the magnetic field and plasma data demonstrates that at these two boundaries there appear large values of the plasma parameter β, a clear increase of plasma temperature and density, a distinct decrease of magnetic field magnitude, and a rotation of the magnetic field direction of about 180 degrees. The outcome of the present simulation agrees qualitatively with the observational results on MC boundaries inferred from IMP-8, etc. The project was supported by the National Natural Science Foundation of China under Grant Nos. 40104006, 49925412, and 49990450.
Improved QM Methods and Their Application in QM/MM Studies of Enzymatic Reactions
NASA Astrophysics Data System (ADS)
Jorgensen, William L.
2007-03-01
Quantum mechanics (QM) and Monte Carlo statistical mechanics (MC) simulations have been used by us since the early 1980s to study reaction mechanisms and the origin of solvent effects on reaction rates. A goal was always to perform the QM and MC/MM calculations simultaneously in order to obtain free-energy surfaces in solution with no geometrical restrictions. This was achieved by 2002 and complete free-energy profiles and surfaces with full sampling of solute and solvent coordinates can now be obtained through one job submission using BOSS [JCC 2005, 26, 1689]. Speed and accuracy demands also led to development of the improved semiempirical QM method, PDDG-PM3 [JCC 1601 (2002); JCTC 817 (2005)]. The combined PDDG-PM3/MC/FEP methodology has provided excellent results for free energies of activation for many reactions in numerous solvents. Recent examples include Cope, Kemp and E1cb eliminations [JACS 8829 (2005), 6141 (2006); JOC 4896 (2006)], as well as enzymatic reactions catalyzed by the putative Diels-Alderase, macrophomate synthase, and fatty-acid amide hydrolase [JACS 3577 (2005); JACS (2006)]. The presentation will focus on the accuracy that is currently achievable in such QM/MM studies and the accuracy of the underlying QM methodology including extensive comparisons of results from PDDG-PM3 and ab initio DFT methods.
Radiation Measurements in Simulated Ablation Layers
2010-12-06
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open-source Monte Carlo code OpenMC with the open-source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
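Functional Expansion Tallies represent a tallied distribution as coefficients of an orthogonal-polynomial series, which can then be evaluated on any receiving mesh. The toy 1D Legendre reconstruction below illustrates the idea; the coefficients are invented, and this is not the OpenMC/MOOSE interface itself:

```python
import numpy as np
from numpy.polynomial import legendre

# Assumed FET coefficients a_n of a 1D axial power shape on x in [-1, 1]:
# f(x) ~= sum_n a_n * P_n(x).  Values are illustrative only.
coeffs = np.array([1.0, 0.0, -0.35, 0.0, 0.05])

# Evaluate the expansion on the nodes of a receiving (finite element) mesh.
mesh_nodes = np.linspace(-1.0, 1.0, 9)
power_on_mesh = legendre.legval(mesh_nodes, coeffs)

# Normalize so the mesh-averaged power is 1, mimicking a relative pin power.
power_on_mesh /= power_on_mesh.mean()
print(np.round(power_on_mesh, 3))
```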
New simulation model of multicomponent crystal growth and inhibition.
Wathen, Brent; Kuiper, Michael; Walker, Virginia; Jia, Zongchao
2004-04-02
We review a novel computational model for the study of crystal structures both on their own and in conjunction with inhibitor molecules. The model advances existing Monte Carlo (MC) simulation techniques by extending them from modeling 3D crystal surface patches to modeling entire 3D crystals, and by including the use of "complex" multicomponent molecules within the simulations. These advances make it possible to incorporate the 3D shape and non-uniform surface properties of inhibitors into simulations, and to study the effect these inhibitor properties have on the growth of whole crystals containing up to tens of millions of molecules. The application of this extended MC model to the study of antifreeze proteins (AFPs) and their effects on ice formation is reported, including the success of the technique in achieving AFP-induced ice-growth inhibition with concurrent changes to ice morphology that mimic experimental results. Simulations of ice-growth inhibition suggest that the degree of inhibition afforded by an AFP is a function of its ice-binding position relative to the underlying anisotropic growth pattern of ice. This extended MC technique is applicable to other crystal and crystal-inhibitor systems, including more complex crystal systems such as clathrates.
An adaptive bias - hybrid MD/kMC algorithm for protein folding and aggregation.
Peter, Emanuel K; Shea, Joan-Emma
2017-07-05
In this paper, we present a novel hybrid Molecular Dynamics/kinetic Monte Carlo (MD/kMC) algorithm and apply it to protein folding and aggregation in explicit solvent. The new algorithm uses a dynamical definition of biases throughout the MD component of the simulation, normalized in relation to the unbiased forces. The algorithm guarantees sampling of the underlying ensemble as a function of one average linear coupling factor 〈α〉_τ. We test the validity of the kinetics in simulations of dialanine and compare dihedral transition kinetics with long-time MD simulations. We find that for low 〈α〉_τ values, the kinetics are in good quantitative agreement. In folding simulations of TrpCage and TrpZip4 in explicit solvent, we also find good quantitative agreement with experimental results and prior MD/kMC simulations. Finally, we apply our algorithm to study growth of the Alzheimer amyloid Aβ16-22 fibril by monomer addition. We observe two possible binding modes, one at the extremity of the fibril (elongation) and one on the surface of the fibril (lateral growth), on timescales ranging from ns to 8 μs.
A new method for shape and texture classification of orthopedic wear nanoparticles.
Zhang, Dongning; Page, Janet R; Kavanaugh, Aaron E; Billi, Fabrizio
2012-09-27
Detailed morphologic analysis of the particles produced during wear of orthopedic implants is important in determining correlations among material, wear, and biological effects. However, simple shape descriptors are insufficient to categorize the data and to compare the nature of wear particles generated by different implants. An approach based on the Discrete Fourier Transform (DFT) is presented for describing particle shape and surface texture. Four metal-on-metal bearing couples were tested in an orbital wear simulator under standard and adverse (steep-angled cups) conditions. Digitized scanning electron microscope (SEM) images of the wear particles were imported into MATLAB to carry out Fourier descriptor calculations via a specifically developed algorithm. The descriptors were then used for studying particle characteristics (shape and texture) as well as for cluster classification. Analysis of the particles demonstrated the validity of the proposed model by showing that steep-angle Co-Cr wear particles were more asymmetric, compressed, extended, triangular, square, and roughened after 3 Mc than after 0.25 Mc. In contrast, particles from standard-angle samples were only more compressed and extended after 3 Mc compared to 0.25 Mc. Cluster analysis revealed that the 0.25 Mc steep-angle particle distribution was a subset of the 3 Mc distribution.
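The following Python sketch shows one common way to compute DFT-based shape descriptors from a digitized particle outline. It is a generic textbook construction under stated normalizations, not the MATLAB algorithm developed in the study.

    import numpy as np

    def fourier_descriptors(x, y, n_keep=16):
        """DFT shape descriptors of a closed boundary given as ordered (x, y)
        points. Normalizations: zero the DC term (translation invariance),
        take magnitudes (rotation/start-point invariance), and divide by the
        first harmonic (scale invariance)."""
        z = np.asarray(x, float) + 1j * np.asarray(y, float)
        mags = np.abs(np.fft.fft(z))
        mags[0] = 0.0
        mags /= mags[1]
        return mags[1:n_keep + 1]

    # Usage on a circle: all descriptors beyond the first harmonic are ~0.
    t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    print(fourier_descriptors(np.cos(t), np.sin(t))[:4])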
Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc
2015-01-01
To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of the Oncentra Brachy (OcB) treatment planning system using a (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of the American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against the TG-43 methodology. Level 2 uses real patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. The ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both the accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed to overlap between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated against MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75%, but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC, but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a memoryless linear system with no signal is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communication systems.
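The translation idea behind IIS can be illustrated with a toy rare-event estimate: samples are drawn from a density translated toward the rare region and reweighted by the likelihood ratio. The Gaussian setting and parameters below are illustrative assumptions, not the paper's communication-system model.

    import numpy as np

    rng = np.random.default_rng(1)
    a, n = 5.0, 100_000          # exceedance threshold (rare for N(0,1)) and sample count

    # Plain MC: essentially no samples exceed the threshold at this sample size.
    p_mc = np.mean(rng.normal(0.0, 1.0, n) > a)

    # IS with a density translated to the threshold, N(a, 1): each sample is
    # weighted by the likelihood ratio f(y)/g(y) = exp(-a*y + a**2/2).
    y = rng.normal(a, 1.0, n)
    p_is = np.mean((y > a) * np.exp(-a * y + 0.5 * a**2))
    print(p_mc, p_is)            # true value is about 2.87e-7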
NASA Astrophysics Data System (ADS)
Limbu, Dil; Biswas, Parthapratim
We present a simple and efficient Monte Carlo (MC) simulation of iron (Fe) and nickel (Ni) clusters with N = 5-100 atoms and of amorphous silicon (a-Si), starting from a random configuration. Using Sutton-Chen and Finnis-Sinclair potentials for Ni (fcc lattice) and Fe (bcc lattice), respectively, and the Stillinger-Weber potential for a-Si, the total energy of the system is optimized by employing MC moves that combine the stochastic nature of MC simulations with the gradient of the potential function. For both iron and nickel clusters, the energy of the configurations is found to be very close to the values listed in the Cambridge Cluster Database, whereas the maximum force on each cluster is found to be much lower than the corresponding value obtained from the optimized structural configurations reported in the database. An extension of the method to model the amorphous state of Si is presented, and the results are compared with experimental data and those obtained from other simulation methods. The work is partially supported by the NSF under Grant Number DMR 1507166.
NASA Astrophysics Data System (ADS)
Yonezawa, Yasushige; Shimoyama, Hiromitsu; Nakamura, Haruki
2011-01-01
Multicanonical molecular dynamics (McMD) simulation and metadynamics (MetaD) are useful for obtaining free energies and can be mutually complementary. We combined McMD with MetaD and applied the combination to the conformational free-energy calculation of a proline dipeptide. First, MetaD was performed along the dihedral angle at the prolyl bond, yielding a coarse biasing potential. After adding the biasing potential to the dihedral-angle potential energy, we conducted McMD with the modified potential energy. Enhanced sampling was achieved for all degrees of freedom, and the sampling of the dihedral-angle space was facilitated. After reweighting, we obtained an accurate free-energy landscape.
Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object
NASA Astrophysics Data System (ADS)
Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain
2011-03-01
Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of treatment planning systems in radiotherapy; this method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on an ECLIPSE VARIAN treatment planning system. Large dose variations were observed, particularly in air and bone equivalent material. In this current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the VARIAN Eclipse Anisotropic Analytical Algorithm was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code, version 2003. To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using pelvic DTOs and was performed for low- and high-energy photon beams (6 and 18 MV) on a 2100CD VARIAN linear accelerator. A square field (10 × 10 cm²) was used. Taking the MC data as reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm and the dose difference to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium, whereas the TPS computes dose to water. We used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show strong consistency between ECLIPSE and MC calculations on the beam axis.
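For context, the sketch below implements a simplified 1-D version of the gamma-style comparison underlying χ-index analyses, using the 5%/7 mm tolerances quoted above. It is a generic construction, not the authors' analysis code.

    import numpy as np

    def gamma_1d(x, d_ref, d_eval, dta=7.0, dd=0.05):
        """Simplified 1-D gamma comparison of two dose profiles sampled at
        positions x (mm). dta is the distance tolerance in mm; dd is the dose
        tolerance as a fraction of the reference maximum. Pass where g <= 1."""
        x, d_ref, d_eval = (np.asarray(a, float) for a in (x, d_ref, d_eval))
        d_max = d_ref.max()
        g = np.empty_like(d_ref)
        for i in range(len(x)):
            dist2 = ((x - x[i]) / dta) ** 2
            dose2 = ((d_eval - d_ref[i]) / (dd * d_max)) ** 2
            g[i] = np.sqrt(np.min(dist2 + dose2))
        return g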
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams
NASA Astrophysics Data System (ADS)
Popescu, I. A.; Shaw, C. P.; Zavgorodni, S. F.; Beckham, W. A.
2005-07-01
Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
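A calibration of this kind turns a per-particle MC dose into an absolute dose, schematically as below. Only the electrons-per-MU figure is the paper's 6 MV value; the per-particle tally and plan MU are hypothetical, and this is not BEAMnrc/DOSXYZnrc code.

    # Schematic absolute-dose rescaling under the calibration described above.
    electrons_per_MU = 8.129e13   # incident electrons on target per MU (paper's 6 MV value)
    dose_per_electron = 2.5e-16   # Gy per incident electron at a voxel, hypothetical MC tally
    MU = 200                      # monitor units delivered by the field, hypothetical

    absolute_dose = dose_per_electron * electrons_per_MU * MU   # Gy
    print(absolute_dose)          # ~4.1 Gy for these illustrative numbers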
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebe, J; Department of Physics and Astronomy, University of Calgary, Calgary, AB; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6 × 6 mm² to 98 × 98 mm² (maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
An energy function for dynamics simulations of polypeptides in torsion angle space
NASA Astrophysics Data System (ADS)
Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.
1998-05-01
Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high-frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single such window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen-bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. Also, the same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of the MD trajectories are also faithfully reproduced. Finally, it is demonstrated that even for high-temperature unfolded polypeptides the MC simulation is more efficient than conventional MD simulations by a factor of 10.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification.
Campbell, Bruce G.; Landmeyer, James E.
2014-01-01
Chesterfield County is located in the northeastern part of South Carolina along the southern border of North Carolina and is primarily underlain by unconsolidated sediments of Late Cretaceous age and younger of the Atlantic Coastal Plain. Approximately 20 percent of Chesterfield County is in the Piedmont Physiographic Province, and this area of the county is not included in this study. These Atlantic Coastal Plain sediments compose two productive aquifers: the Crouch Branch aquifer that is present at land surface across most of the county and the deeper, semi-confined McQueen Branch aquifer. Most of the potable water supplied to residents of Chesterfield County is produced from the Crouch Branch and McQueen Branch aquifers by a well field located near McBee, South Carolina, in the southwestern part of the county. Overall, groundwater availability is good to very good in most of Chesterfield County, especially the area around and to the south of McBee, South Carolina. The eastern part of Chesterfield County does not have as abundant groundwater resources but resources are generally adequate for domestic purposes. The primary purpose of this study was to determine groundwater-flow rates, flow directions, and changes in water budgets over time for the Crouch Branch and McQueen Branch aquifers in the Chesterfield County area. This goal was accomplished by using the U.S. Geological Survey finite-difference MODFLOW groundwater-flow code to construct and calibrate a groundwater-flow model of the Atlantic Coastal Plain of Chesterfield County. The model was created with a uniform grid size of 300 by 300 feet to facilitate a more accurate simulation of groundwater-surface-water interactions. The model consists of 617 rows from north to south extending about 35 miles and 884 columns from west to east extending about 50 miles, yielding a total area of about 1,750 square miles. However, the active part of the modeled area, or the part where groundwater flow is simulated, totaled about 1,117 square miles. Major types of data used as input to the model included groundwater levels, groundwater-use data, and hydrostratigraphic data, along with estimates and measurements of stream base flows made specifically for this study. The groundwater-flow model was calibrated to groundwater-level and stream base-flow conditions from 1900 to 2012 using 39 stress periods. The model was calibrated with an automated parameter-estimation approach using the computer program PEST, and the model used regularized inversion and pilot points. The groundwater-flow model was calibrated using field data that included groundwater levels that had been collected between 1940 and 2012 from 239 wells and base-flow measurements from 44 locations distributed within the study area. To better understand recharge and inter-aquifer interactions, seven wells were equipped with continuous groundwater-level recording equipment during the course of the study, between 2008 and 2012. These water levels were included in the model calibration process. The observed groundwater levels were compared to the simulated ones, and acceptable calibration fits were achieved. Root mean square error for the simulated groundwater levels compared to all observed groundwater levels was 9.3 feet for the Crouch Branch aquifer and 8.6 feet for the McQueen Branch aquifer. The calibrated groundwater-flow model was then used to calculate groundwater budgets for the entire study area and for two sub-areas. 
The sub-areas are the Alligator Rural Water and Sewer Company well field near McBee, South Carolina, and the Carolina Sandhills National Wildlife Refuge acquisition boundary area. For the overall model area, recharge rates vary from 56 to 1,679 million gallons per day (Mgal/d) with a mean of 737 Mgal/d over the simulation period (1900–2012). The simulated water budget for the streams and rivers varies from 653 to 1,127 Mgal/d with a mean of 944 Mgal/d. The simulated “storage-in term” ranges from 0 to 565 Mgal/d with a mean of 276 Mgal/d. The simulated “storage-out term” has a range of 0 to 552 Mgal/d with a mean of 77 Mgal/d. Groundwater budgets for the McBee, South Carolina, area and the Carolina Sandhills National Wildlife Refuge acquisition area had similar results. An analysis of the effects of past and current groundwater withdrawals on base flows in the McBee area indicated a negligible effect of pumping from the Alligator Rural Water and Sewer well field on local stream base flows. Simulated base flows for 2012 for selected streams in and around the McBee area were similar with and without simulated groundwater withdrawals from the well field. Removing all pumping from the model for the entire simulation period (1900–2012) produces a negligible difference in increased base flow for the selected streams. The 2012 flow for Lower Alligator Creek was 5.04 Mgal/d with the wells pumping and 5.08 Mgal/d without the wells pumping; this represents the largest difference in simulated flows for the six streams.
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
To develop an infrastructure for an integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of a graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with the MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS uses the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit were developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed using dose profiles and dose-volume histograms and compared on the same platform. With the cluster system, calculation time improved in line with the increase in the number of central processing units (CPUs), at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
NASA Astrophysics Data System (ADS)
Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh
2016-09-01
Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, the flyrock distance can be controlled and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses; using the developed MR model, the flyrock phenomenon was then simulated by the Monte Carlo (MC) approach. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and several controllable and uncontrollable factors were carefully recorded or calculated. The results of the MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean simulated flyrock distance from MC was 236.3 m, compared with a measured mean of 238.6 m. Furthermore, a sensitivity analysis was conducted to investigate the effects of the model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. It should be noted that the proposed MR and MC models should be utilized only in the studied area; their direct use under other conditions is not recommended.
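A minimal sketch of the MR-plus-MC workflow just described: blast parameters are sampled from assumed distributions and propagated through a regression model. The coefficients and distributions below are placeholder assumptions, not the fitted Ulu Tiram model.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # Hypothetical input distributions for controllable blast parameters.
    powder_factor = rng.normal(0.60, 0.10, n)   # kg/m^3
    burden = rng.normal(3.0, 0.3, n)            # m
    stemming = rng.normal(2.0, 0.2, n)          # m

    # Placeholder linear MR model for flyrock distance (m).
    flyrock = 150.0 + 180.0 * powder_factor - 15.0 * burden + 10.0 * stemming
    print(flyrock.mean(), np.percentile(flyrock, 95))   # mean and 95th percentile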
Full-orbit and backward Monte Carlo simulation of runaway electrons
NASA Astrophysics Data System (ADS)
Del-Castillo-Negrete, Diego
2017-10-01
High-energy relativistic runaway electrons (RE) can be produced during magnetic disruptions due to electric fields generated during the thermal and current quench of the plasma. Understanding this problem is key for the safe operation of ITER because, if not avoided or mitigated, RE can severely damage the plasma-facing components. In this presentation we report on RE simulation efforts centered on two complementary approaches: (i) full-orbit (6-D phase space) relativistic numerical simulations in general (integrable or chaotic) 3-D magnetic and electric fields, including radiation damping and collisions, using the recently developed particle-based Kinetic Orbit Runaway electron Code (KORC), and (ii) backward Monte Carlo (MC) simulations based on a recently developed efficient backward stochastic differential equation (BSDE) solver. Following a description of the corresponding numerical methods, we present applications to: (i) RE synchrotron radiation (SR) emission using KORC, and (ii) computation of time-dependent runaway probability distributions, RE production rates, and expected slowing-down and runaway times using BSDE. We study the dependence of these statistical observables on the electric and magnetic fields and the ion effective charge. SR is a key energy dissipation mechanism in the high-energy regime, and it is also extensively used as an experimental diagnostic of RE. Using KORC we study full-orbit effects on SR and discuss a recently developed SR synthetic diagnostic that incorporates the full angular dependence of SR and the location and basic optics of the camera. It is shown that oversimplifying the angular dependence of SR and/or ignoring orbit effects can significantly modify the shape and overestimate the amplitude of the spectra. Applications to DIII-D RE experiments are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan
2011-02-01
This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and scattering coefficient μs) of cylindrical tissue, such as a cervix, from frequency-domain measurements of near-infrared diffuse light. Frequency-domain information (amplitude and phase) is extracted from the time-domain MC with a modified method. To shorten the computation time in reconstructing the optical properties, an efficient and fast forward MC has to be achieved. To do this, databases of the frequency-domain information over a range of μa and μs were first pre-built by combining MC simulation with the Lambert-Beer law. A double polynomial model was then adopted to quickly obtain the frequency-domain information for any optical properties. Based on the fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstruction from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in reconstructing μs and μa are less than ±6% and ±12%, respectively, when the other coefficient is held at a fixed value. When both μa and μs are unknown, the relative errors in reconstructing the reduced scattering coefficient and the absorption coefficient are mainly less than ±10% in the range 45 < μs < 80 cm⁻¹ and 0.25 < μa < 0.55 cm⁻¹. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 s. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that a relative error of less than 20% can be achieved.
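The rapid-inversion idea can be sketched as follows: a cheap surrogate replaces the MC forward model, and a nonlinear least-squares fit recovers (μa, μs). The polynomial surrogate and its coefficients are invented stand-ins for the pre-built MC database; only the parameter bounds echo the validity range stated above.

    import numpy as np
    from scipy.optimize import least_squares

    def forward(mua, mus):
        """Hypothetical smooth surrogate for the MC-generated database, mapping
        (mua, mus) to frequency-domain amplitude and phase."""
        amp = 1.0 - 0.8 * mua + 0.002 * mus
        phase = 0.3 + 0.5 * mua + 0.004 * mus
        return np.array([amp, phase])

    measured = forward(0.40, 60.0)   # synthetic "measurement" at known optical properties

    # Nonlinear optimization recovers the optical properties from the fast forward model.
    fit = least_squares(lambda p: forward(p[0], p[1]) - measured,
                        x0=[0.30, 50.0],
                        bounds=([0.25, 45.0], [0.55, 80.0]))
    print(fit.x)                     # recovers ~[0.40, 60.0]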
kmos: A lattice kinetic Monte Carlo framework
NASA Astrophysics Data System (ADS)
Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten
2014-07-01
Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
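A generic rejection-free lattice-kMC step of the kind such frameworks execute can be sketched in a few lines; this is the standard BKL/Gillespie scheme under its usual assumptions, not kmos's internal implementation or API.

    import numpy as np

    def kmc_step(rates, rng):
        """One rejection-free kMC step: select an event with probability
        proportional to its rate and advance the clock by an exponential
        waiting time drawn at the total rate."""
        total = rates.sum()
        event = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
        dt = -np.log(rng.random()) / total
        return event, dt

    # Usage: three elementary processes with rates in 1/s.
    rng = np.random.default_rng(3)
    print(kmc_step(np.array([1.0, 0.1, 10.0]), rng))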
SUPERNOVA DRIVING. I. THE ORIGIN OF MOLECULAR CLOUD TURBULENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padoan, Paolo; Pan, Liubin; Haugbølle, Troels
2016-05-01
Turbulence is ubiquitous in molecular clouds (MCs), but its origin is still unclear because MCs are usually assumed to live longer than the turbulence dissipation time. Interstellar medium (ISM) turbulence is likely driven by supernova (SN) explosions, but it has never been demonstrated that SN explosions can establish and maintain a turbulent cascade inside MCs consistent with the observations. In this work, we carry out a simulation of SN-driven turbulence in a volume of (250 pc)³, specifically designed to test if SN driving alone can be responsible for the observed turbulence inside MCs. We find that SN driving establishes a velocity scaling consistent with the usual scaling laws of supersonic turbulence, suggesting that previous idealized simulations of MC turbulence, driven with a random, large-scale volume force, were correctly adopted as appropriate models for MC turbulence, despite the artificial driving. We also find that the same scaling laws extend to the interiors of MCs, and that the velocity-size relation of the MCs selected from our simulation is consistent with that of MCs from the Outer-Galaxy Survey, the largest MC sample available. The mass-size relation and the mass and size probability distributions also compare successfully with those of the Outer Galaxy Survey. Finally, we show that MC turbulence is super-Alfvénic with respect to both the mean and rms magnetic-field strength. We conclude that MC structure and dynamics are the natural result of SN-driven turbulence.
Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin
Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul
2014-01-01
The Edisto River is the longest and largest river system completely contained in South Carolina and is one of the longest free flowing blackwater rivers in the United States. The Edisto River basin also has fish-tissue mercury concentrations that are some of the highest recorded in the United States. As part of an effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River basin, analyses and simulations of the hydrology of the Edisto River basin were made with the topography-based hydrological model (TOPMODEL). The potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, which is a small headwater catchment to the Edisto River basin, was assessed. Scaling up was done in a step-wise process beginning with applying the calibration parameters, meteorological data, and topographic wetness index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made with subsequent simulations culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River basin and updated calibration parameters for some of the TOPMODEL calibration parameters. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the two models showed that with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the significant difference in the drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek basin were used in the water-quality load models.
Amoush, Ahmad; Wilkinson, Douglas A.
2015-01-01
This work is a comparative study of the dosimetry calculated by Plaque Simulator, a treatment planning system for eye plaque brachytherapy, to the dosimetry calculated using Monte Carlo simulation for an Eye Physics model EP917 eye plaque. Monte Carlo (MC) simulation using MCNPX 2.7 was used to calculate the central axis dose in water for an EP917 eye plaque fully loaded with 17 IsoAid Advantage 125I seeds. In addition, the dosimetry parameters Λ, gL(r), and F(r,θ) were calculated for the IsoAid Advantage model IAI‐125 125I seed and benchmarked against published data. Bebig Plaque Simulator (PS) v5.74 was used to calculate the central axis dose based on the AAPM Updated Task Group 43 (TG‐43U1) dose formalism. The calculated central axis dose from MC and PS was then compared. When the MC dosimetry parameters for the IsoAid Advantage 125I seed were compared with the consensus values, Λ agreed with the consensus value to within 2.3%. However, much larger differences were found between MC calculated gL(r) and F(r,θ) and the consensus values. The differences between MC‐calculated dosimetry parameters are much smaller when compared with recently published data. The differences between the calculated central axis absolute dose from MC and PS ranged from 5% to 10% for distances between 1 and 12 mm from the outer scleral surface. When the dosimetry parameters for the 125I seed from this study were used in PS, the calculated absolute central axis dose differences were reduced by 2.3% from depths of 4 to 12 mm from the outer scleral surface. We conclude that PS adequately models the central dose profile of this plaque using its defaults for the IsoAid model IAI‐125 at distances of 1 to 7 mm from the outer scleral surface. However, improved dose accuracy can be obtained by using updated dosimetry parameters for the IsoAid model IAI‐125 125I seed. PACS number: 87.55.K‐ PMID:26699577
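For reference, the TG-43 formalism evaluated by both PS and the MC benchmarking above has the 2-D form sketched here; the interpolating callables stand in for the tabulated gL(r), F(r,θ), and geometry function, and are assumptions of this sketch rather than any planning system's API.

    import numpy as np

    def tg43_dose_rate(Sk, Lam, r, theta, gL, F, GL, r0=1.0, theta0=np.pi / 2):
        """TG-43U1 2-D dose-rate formalism:
        Ddot(r, theta) = Sk * Lambda * [G_L(r, theta) / G_L(r0, theta0)]
                         * g_L(r) * F(r, theta),
        with r in cm, Sk the air-kerma strength, Lam the dose-rate constant,
        and gL, F, GL user-supplied interpolants of the tabulated parameters."""
        return Sk * Lam * (GL(r, theta) / GL(r0, theta0)) * gL(r) * F(r, theta)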
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y; Cai, J; Meltsner, S
2016-06-15
Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict due to the larger interior ring lumen. Some studies showed the source could be several millimeters from planned positions, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step sizes) was simulated as the summation of 10 individual sources with equal dwell times for simplification. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, and ±0.5 cm were then simulated as distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. For each of the seventy simulations, 10⁸ photon histories gave statistical uncertainties (k=1) of <2% for (0.1 cm)³ voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for distal global shifts, and 0.4%, 2.8%, and 5.1% higher for proximal global shifts, respectively. The MC Point A doses differed by <1% when compared to BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cawkwell, Marc Jon
2016-09-09
The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)] and electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of (i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, (ii) tensors of elastic constants and compliances via Parrinello and Rahman's fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and (iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires the LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
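The isothermal-isobaric sampling MC3 performs rests on a standard NPT acceptance rule for volume moves; a minimal, code-agnostic Python sketch of that rule (not the MC3 Fortran source) is:

    import numpy as np

    def accept_volume_move(dU, P, V_old, V_new, N, kT, rng):
        """NPT Metropolis test for a volume move of N rigid molecules:
        accept with probability
        min(1, exp(-[dU + P*(V_new - V_old) - N*kT*ln(V_new/V_old)] / kT)),
        where dU is the change in configurational energy."""
        arg = -(dU + P * (V_new - V_old) - N * kT * np.log(V_new / V_old)) / kT
        return np.log(rng.random()) < arg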
NASA Astrophysics Data System (ADS)
Chen, Zhe; Kecskes, Laszlo J.; Zhu, Kaigui; Wei, Qiuming
2016-12-01
Uniaxial tensile properties of monocrystalline tungsten (MC-W) and nanocrystalline tungsten (NC-W) with embedded hydrogen and helium atoms have been investigated using molecular dynamics (MD) simulations in the context of radiation damage evolution. Different strain rates have been imposed to investigate the strain rate sensitivity (SRS) of the samples. Results show that the plastic deformation processes of MC-W and NC-W are dominated by different mechanisms: dislocation-based activity for MC-W and grain boundary-based activity for NC-W. For MC-W, the SRS increases and a transition appears in the deformation mechanism with increasing embedded atom concentration. However, no obvious embedded atom concentration dependence of the SRS has been observed for NC-W. Instead, in the latter case, the embedded atoms facilitate GB sliding and intergranular fracture. Additionally, strong strain-enhanced He cluster growth has been observed. The corresponding underlying mechanisms are discussed.
Application of MC1 to Wind Cave National Park: Lessons from a small-scale study: Chapter 8
King, David A.; Bachelet, Dominique M.; Symstad, Amy J.
2015-01-01
MC1 was designed for application to large regions that include a wide range in elevation and topography, thereby encompassing a broad range in climates and vegetation types. The authors applied the dynamic global vegetation model MC1 to Wind Cave National Park (WCNP) in the southern Black Hills of South Dakota, USA, on the ecotone between ponderosa pine forest to the northwest and mixed-grass prairie to the southeast. They calibrated MC1 to simulate adequate fire effects in the warmer southeastern parts of the park to ensure grasslands there, while allowing forests to grow to the northwest, and then simulated future vegetation with climate projections from three GCMs. The results suggest that fire frequency, as affected by climate and/or human intervention, may be more important than the direct effects of climate in determining the distribution of ponderosa pine in the Black Hills region, both historically and in the future.
Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A
2014-01-01
The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2017-02-01
This report considers the development of a unified Monte Carlo (MC)-based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory, and helicity flip are observed, and a detailed comparison is presented and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hoyoung; Kang, Jun-Yun
This study aimed to present the complete history of carbide evolution in a cold-work tool steel along its full processing route for fabrication and application. A sequence of processes from casting to the final hardening heat treatment was conducted on an 8% Cr steel to reproduce a typical commercial processing route on a small scale. The carbides found at each process step were then identified by electron diffraction with energy dispersive spectroscopy in a scanning or transmission electron microscope. After solidification, MC, M7C3, and M2C carbides were identified, and the last dissolved during hot compression at 1180 °C. In a subsequent annealing at 870 °C followed by slow cooling, M6C and M23C6 were added, while they were dissolved in the following austenitization at 1030 °C. After the final tempering at 520 °C, fine M23C6 precipitated again; thus the final microstructure was tempered martensite with MC, M7C3, and M23C6 carbides. The transient M2C and M6C originated from the segregation of Mo and finally disappeared due to attenuated segregation and the consequent thermodynamic instability. - Highlights: • The full processing route of a cold-work tool steel was simulated on a small scale. • The carbides in the tool steel were identified by chemical-crystallographic analyses. • MC, M7C3, M2C, M6C, and M23C6 carbides were found during the processing of the steel. • M2C and M6C finally disappeared due to thermodynamic instability.
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
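The sequential form of the additive lagged Fibonacci generator behind the proposed parallel ALFG is easy to state; the FPGA parallelization itself is not shown, and the lag pair (418, 1279), from a standard primitive trinomial, is used here only for illustration.

    import random

    class ALFG:
        """Additive lagged Fibonacci generator: x_n = (x_{n-j} + x_{n-k}) mod 2^m."""
        def __init__(self, seed_words, j=418, k=1279, m=32):
            assert len(seed_words) == k and j < k
            self.state = list(seed_words)        # last k outputs, oldest first
            self.j, self.k = j, k
            self.mask = (1 << m) - 1

        def next(self):
            x = (self.state[-self.j] + self.state[-self.k]) & self.mask
            self.state.append(x)
            self.state.pop(0)                    # slide the k-word window
            return x

    # Seeding: k words, at least one of them odd for maximal period.
    seed = [random.getrandbits(32) for _ in range(1279)]
    seed[0] |= 1
    print(ALFG(seed).next())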
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, A; Wu, Q; Sawkey, D
Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated, used in the MC simulation, and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured dose at a depth of 1.5 cm and was compared to the simulation results. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3 × 10 cm² cut-out in a 15 × 15 cm² applicator for delivery of a 90° arc. The resulting data provide qualitative and quantitative evidence that the simulation platform can serve as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model-based algorithms, allowing efficient optimization of trajectories and quality assurance of plans without the need for extensive measurements.
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing; double-precision computing for floating-point arithmetic operations provides higher accuracy.
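Two building blocks of any tissue-optics photon-migration MC are shown below as a generic hedged sketch rather than the P2P/GPU code itself: exponential free-path sampling and Henyey-Greenstein scattering.

    import numpy as np

    def free_path(mu_t, rng):
        """Step length between interactions for total attenuation mu_t (1/mm)."""
        return -np.log(rng.random()) / mu_t

    def hg_cos_theta(g, rng):
        """Deflection cosine sampled from the Henyey-Greenstein phase function,
        the standard anisotropic scattering model in tissue optics (g = <cos theta>)."""
        if g == 0.0:
            return 2.0 * rng.random() - 1.0
        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        return (1.0 + g * g - s * s) / (2.0 * g)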
The effect of linear spring number at side load of McPherson suspension in electric city car
NASA Astrophysics Data System (ADS)
Budi, Sigit Setijo; Suprihadi, Agus; Makhrojan, Agus; Ismail, Rifky; Jamari, J.
2017-01-01
The function of the suspension spring in the McPherson type is to control vehicle stability and increase ride comfort, although such springs tend to develop side loads. The purpose of this study is to obtain simulation results for the McPherson suspension spring of an electric city car using the finite element method and to determine the side load that appears on the spring. This research is conducted in several stages: designing linear spring models with various numbers of coils, and modeling the spring suspension using FEM software. The suspension spring is compressed in the vertical direction (z-axis), and the forces arising along the x, y, and z axes at the upper part of the spring are evaluated to simulate the side load arising there. The FEM simulations show that the spring whose side loads along the x and y axes are closest to zero is the most stable.
NASA Astrophysics Data System (ADS)
Ilyasov, Ildar K.; Prikhodko, Constantin V.; Nevorotin, Alexey J.
1995-01-01
A Monte Carlo (MC) simulation model and a thermoindicative tissue phantom were applied to evaluate the depth of tissue necrosis (DTN) resulting from quasi-cw copper vapor laser (578 nm) irradiation. It has been shown that the focusing angle of the incident light is essential for DTN. In particular, there was a significant rise in DTN as this angle was increased, up to +20° for the MC simulation model and +5° for the tissue phantom model, with no further increase in necrosis depth above these angles. Notably, the relationship between focusing angle and DTN was apparently stronger for the real target than for the MC-derived hypothetical one. To what extent these data are applicable to medical practice can be evaluated in animal models simulating laser-assisted therapy for PWS or related dermatologic lesions with converged 578 nm laser beams.
BCA-kMC Hybrid Simulation for Hydrogen and Helium Implantation in Material under Plasma Irradiation
NASA Astrophysics Data System (ADS)
Kato, Shuichi; Ito, Atsushi; Sasao, Mamiko; Nakamura, Hiroaki; Wada, Motoi
2015-09-01
Ion implantation into materials by plasma irradiation achieves very high impurity concentrations. The high impurity concentration causes deformation and destruction of the material, a phenomenon peculiar to plasma-material interaction (PMI). The injection of plasma particles is generally simulated using the binary collision approximation (BCA) and molecular dynamics (MD), while the diffusion of implanted atoms has traditionally been solved with the diffusion equation, in which the implanted atoms are replaced by a continuous concentration field. However, the diffusion equation has insufficient accuracy in the case of low concentration, and in the case of locally high concentration such as hydrogen blistering and helium bubbles. This problem is overcome by kinetic Monte Carlo (kMC), which represents the diffusion of the implanted atoms as jumps between interstitial sites in a material. In this paper, we propose a new approach, ``BCA-kMC hybrid simulation,'' for hydrogen and helium implantation under plasma irradiation.
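As a rough illustration of the kMC half of such a hybrid, the sketch below performs residence-time (BKL) hops of an implanted atom between interstitial sites at an Arrhenius rate. The 1D site chain, attempt frequency, barrier, and temperature are placeholder assumptions, not values from the paper; the BCA injection stage is omitted entirely.

```python
import math
import random

# Kinetic Monte Carlo hops of one implanted atom on a 1D interstitial chain.
# Assumed attempt frequency [Hz], barrier [eV], Boltzmann constant [eV/K], T [K].
NU0, EA, KB, T = 1.0e13, 0.4, 8.617e-5, 600.0
RATE = NU0 * math.exp(-EA / (KB * T))           # hop rate per direction

def kmc_walk(n_steps):
    pos, t = 0, 0.0
    for _ in range(n_steps):
        total = 2.0 * RATE                      # left + right hop channels
        t += -math.log(1.0 - random.random()) / total   # BKL residence time
        pos += 1 if random.random() < 0.5 else -1
    return pos, t

# Mean-square displacement should follow <x^2> = 2 D t with D = a^2 * RATE
# (lattice constant a = 1 here), which gives a quick sanity check.
trials = [kmc_walk(1000) for _ in range(2000)]
msd = sum(p * p for p, _ in trials) / len(trials)
tavg = sum(t for _, t in trials) / len(trials)
print("D (kMC) ~", msd / (2.0 * tavg), " vs a^2*RATE =", RATE)
```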
Magnetic Levitation of MC3T3 Osteoblast Cells as a Ground-Based Simulation of Microgravity
Kidder, Louis S.; Williams, Philip C.; Xu, Wayne Wenzhong
2009-01-01
Diamagnetic samples placed in a strong magnetic field and a magnetic field gradient experience a magnetic force. Stable magnetic levitation occurs when the magnetic force exactly counterbalances the gravitational force. Under this condition, a diamagnetic sample is in a simulated microgravity environment. The purpose of this study is to explore whether MC3T3-E1 osteoblastic cells can be grown in magnetically simulated hypo-g and hyper-g environments and to determine whether genes are differentially expressed under these conditions. The murine calvarial osteoblastic cell line MC3T3-E1, grown on Cytodex-3 beads, was subjected to a net gravitational force of 0, 1 and 2 g in a 17 T superconducting magnet for 2 days. Microarray analysis of these cells indicated that gravitational stress leads to up- and down-regulation of hundreds of genes. The methodology for sustaining long-term magnetic levitation of biological systems is discussed. PMID:20052306
LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme
NASA Technical Reports Server (NTRS)
Hadjadj, A; Yee, H. C.; Sjogreen, B.
2011-01-01
An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) and Reynolds numbers. The high-order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The values of Mc considered span the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self-similarity property, compressibility effects, and the presence of large-scale structures with shocklets at high Mc) are considered for the LES study. The LES results using the same scheme parameters for all studied cases agree well with the experimental results of Barone et al. (2006) and with the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach for porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of the MC simulation, with a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
ERIC Educational Resources Information Center
Dombrowski, Stefan C.; McGill, Ryan J.; Canivez, Gary L.
2018-01-01
The Woodcock-Johnson (fourth edition; WJ IV; Schrank, McGrew, & Mather, 2014a) was recently redeveloped and retains its linkage to Cattell-Horn-Carroll theory (CHC). Independent reviews (e.g., Canivez, 2017) and investigations (Dombrowski, McGill, & Canivez, 2017) of the structure of the WJ IV full test battery and WJ IV Cognitive have…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.
The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.
Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; ...
2017-10-03
The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.
The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.
NASA Astrophysics Data System (ADS)
Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; Dolence, Joshua; Sumiyoshi, Kohsuke; Yamada, Shoichi
2017-10-01
The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Huang, Aiqun; Bhattacharya, Aniket; Binder, Kurt
2015-03-01
In this talk we compare the results obtained from Monte Carlo (MC) and Brownian dynamics (BD) simulations for the universal properties of a semi-flexible chain. Specifically, we compare MC results obtained using the pruned-enriched Rosenbluth method (PERM) with those obtained from BD simulation. We find that the scaled plot of root-mean-square (RMS) end-to-end distance
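For orientation, the toy below illustrates the statistic the (truncated) abstract refers to: the RMS end-to-end distance of an ideal freely-jointed chain, which scales as the square root of the number of bonds. PERM-sampled self-avoiding or semi-flexible chains obey different exponents; this ideal-chain sketch only makes the "scaled RMS distance" notion concrete.

```python
import math
import random

# RMS end-to-end distance of an ideal freely-jointed chain with unit bonds.
# For this model <R^2> = N, so the printed ratio should hover around 1.

def rms_end_to_end(n_bonds, n_chains=2000):
    tot = 0.0
    for _ in range(n_chains):
        x = y = z = 0.0
        for _ in range(n_bonds):
            ct = 2.0 * random.random() - 1.0   # uniform random unit bond vector
            st = math.sqrt(1.0 - ct * ct)
            phi = 2.0 * math.pi * random.random()
            x += st * math.cos(phi); y += st * math.sin(phi); z += ct
        tot += x * x + y * y + z * z
    return math.sqrt(tot / n_chains)

for n in (16, 64, 256):
    print(n, rms_end_to_end(n) / math.sqrt(n))
```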
Ion-mediated interactions in suspensions of oppositely charged nanoparticles
NASA Astrophysics Data System (ADS)
Dahirel, Vincent; Hansen, Jean Pierre
2009-08-01
The structure of oppositely charged spherical nanoparticles (polyions), dispersed in ionic solutions with continuous solvent (primitive model), is investigated by Monte Carlo (MC) simulations, within explicit and implicit microion representations, over a range of polyion valences and densities, and microion concentrations. Systems with explicit microions are explored by semigrand canonical MC simulations, and allow density-dependent effective polyion pair potentials v_αβ^eff(r) to be extracted from measured partial pair distribution functions. Implicit microion MC simulations are based on pair potentials of mean force v_αβ^(2)(r) computed by explicit microion simulations of two charged polyions, in the low density limit. In the vicinity of the liquid-gas separation expected for oppositely charged polyions, the implicit microion representation leads to an instability against density fluctuations for polyion valences |Z| significantly below those at which the instability sets in within the exact explicit microion representation. Far from this instability region, the v_αβ^(2)(r) are found to be fairly close to but consistently more repulsive than the effective pair potentials v_αβ^eff(r). This is corroborated by additional calculations of three-body forces between polyion triplets, which are repulsive when one polyion is of opposite charge to the other two. The explicit microion MC data were exploited to determine the ratio of salt concentrations c and c_o within the dispersion and the reservoir (Donnan effect). c/c_o is found to first increase before finally decreasing as a function of the polyion packing fraction.
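The low-density inversion step behind the implicit-microion route can be sketched in a few lines: in that limit the pair potential of mean force follows from a measured pair distribution function as v(r) = -kT ln g(r). The g(r) table below is fabricated for illustration; real input would come from the explicit-microion runs.

```python
import math

# Invert a pair distribution function into a potential of mean force,
# v(r) = -kT ln g(r). Energies in units of k_B T; the g(r) values are made up.
KT = 1.0

g_table = {   # r (in sigma) -> g(r), a fabricated attractive-contact example
    1.0: 3.2, 1.2: 1.9, 1.5: 1.25, 2.0: 1.05, 3.0: 1.0,
}

def v_mean_force(g):
    return -KT * math.log(g)

for r, g in sorted(g_table.items()):
    print(f"r = {r:.1f}  g(r) = {g:.2f}  v(r)/kT = {v_mean_force(g):+.3f}")
```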
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
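A minimal sketch of the Pauli-blocking rejection step at issue: a tentative final state is accepted with probability 1 - f(E'), here with an equilibrium Fermi-Dirac occupancy standing in for the simulated distribution function that the improved schemes actually track. All numerical values are illustrative assumptions.

```python
import math
import random

# Pauli-blocking rejection for a tentative scattering event: the final state
# is accepted only if it is (probabilistically) unoccupied.
KB_T, E_F = 0.0259, 0.2   # eV at 300 K; assumed Fermi level [eV]

def fermi_dirac(e):
    return 1.0 / (1.0 + math.exp((e - E_F) / KB_T))

def attempt_scatter(e_initial, e_final):
    """Return the post-event energy, enforcing Pauli exclusion."""
    if random.random() < 1.0 - fermi_dirac(e_final):
        return e_final          # final state likely empty: accept
    return e_initial            # blocked: self-scattering, state unchanged

# A transition far below E_F is almost always blocked:
blocked = sum(attempt_scatter(0.25, 0.05) == 0.25 for _ in range(10000))
print("fraction blocked ~", blocked / 10000, " expected ~", fermi_dirac(0.05))
```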
Three High Schools Revisited--Andrews, McPherson, and Nova. Profiles of Significant Schools.
ERIC Educational Resources Information Center
Kohn, Sherwood D.
Three schools--Nova High School in Fort Lauderdale, Florida, McPherson Senior High School in McPherson, Kansas, and Andrews Senior High School in Andrews, Texas--are examined in this report. All of them are considered advanced educational plants, and all have been in full operation for less than five years, but most of their innovational aspects…
NASA Astrophysics Data System (ADS)
El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.
2015-10-01
A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitter calculations in proton therapy. Prompt-γ rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, estimating the number and energy of prompt-γ rays emitted per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, an MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. In the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique against an analog MC technique is carried out. A large relative efficiency gain is reported, ca. 10^5.
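The TLE idea can be sketched as follows: rather than sampling rare emission events, every proton step deposits the expected prompt-γ spectrum for its track length, kinetic energy, and local material. The database numbers, step size, and energy-loss model below are made-up placeholders, not the paper's tabulated spectra.

```python
# Track-length-estimator tally: deterministic expectation scored per step.
# Hypothetical database: prompt-gammas per mm of track, per proton, for two
# gamma-energy bins, keyed by material and proton kinetic energy [MeV].
DB = {
    "soft_tissue": {100.0: [0.8e-4, 1.5e-4], 150.0: [1.2e-4, 2.1e-4]},
    "bone":        {100.0: [1.1e-4, 2.0e-4], 150.0: [1.6e-4, 2.8e-4]},
}

def nearest_key(d, e):
    return min(d, key=lambda k: abs(k - e))

def transport_proton(e_mev, voxels, step_mm=1.0, de_per_mm=1.5):
    """March one proton through a voxel list, accumulating the TLE tally."""
    tally = [0.0, 0.0]
    for material in voxels:
        if e_mev <= 0.0:
            break
        spectrum = DB[material][nearest_key(DB[material], e_mev)]
        for b, rate in enumerate(spectrum):  # score the expectation, no noise
            tally[b] += rate * step_mm
        e_mev -= de_per_mm * step_mm         # crude continuous slowing-down
    return tally

voxels = ["soft_tissue"] * 60 + ["bone"] * 20 + ["soft_tissue"] * 40
print("expected prompt-gammas per proton (2 bins):", transport_proton(150.0, voxels))
```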
Development of a Multi-Channel Piezoelectric Acoustic Sensor Based on an Artificial Basilar Membrane
Jung, Youngdo; Kwak, Jun-Hyuk; Lee, Young Hwa; Kim, Wan Doo; Hur, Shin
2014-01-01
In this research, we have developed a multi-channel piezoelectric acoustic sensor (McPAS) that mimics the function of the natural basilar membrane, capable of mechanically separating incoming acoustic signals by their frequency and generating corresponding electrical signals. The McPAS operates without an external energy source or signal processing unit, using a vibrating piezoelectric thin film membrane. The shape of the vibrating membrane was chosen to be trapezoidal such that different locations on the membrane have different local resonance frequencies. The length of the membrane is 28 mm and its width varies from 1 mm to 8 mm. Multiphysics finite element analysis (FEA) was carried out to predict and design the mechanical behavior and piezoelectric response of the McPAS model. The designed McPAS was fabricated with a MEMS fabrication process based on the simulated results. The fabricated device was tested with a mouth simulator to measure its mechanical and piezoelectric frequency response with a laser Doppler vibrometer and an acoustic signal analyzer. The experimental results show that the as-fabricated McPAS can successfully separate incoming acoustic signals within the 2.5 kHz–13.5 kHz range, and the maximum electrical signal output for an acoustic input of 94 dB SPL was 6.33 mVpp. The performance of the fabricated McPAS coincided well with the design parameters. PMID:24361926
Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities
NASA Astrophysics Data System (ADS)
Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.
2012-03-01
A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic/volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions, even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft tissue. A scatter-to-primary ratio (SPR) of up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without an increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streaks. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated, yet practical approach to identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.
NASA Astrophysics Data System (ADS)
Guo, Liwen
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, the simulation transport delay remains a problem. Because of the limitations shown in the three prominent existing delay compensators---the lead/lag filter, the McFarland compensator, and the Sobiski/Cardullo predictor---new approaches to compensating the transport delay in a flight simulator have been developed. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner so that the predictor can accurately provide the desired amount of prediction, significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, the work illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and to no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. Four metrics---the glide slope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating on the handling qualities---were employed for the analyses. The overall analyses show that while the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator, the state space predictor is fairly superior to the McFarland compensator for short delay and significantly superior for long delay. The state space predictor also achieves better compensation than the adaptive predictor. The results of the evaluation of these predictors in the piloted tests agree with those of the theoretical offline tests conducted with the recorded simulation aircraft states.
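The state-space prediction idea reduces, in a toy setting, to propagating the last received state forward through a reference model by the known delay. The sketch below uses a trivial constant-velocity model in one dimension; the dissertation's compensator is built on an aircraft dynamics model, so everything here is an illustrative assumption.

```python
import math

# Toy delay compensation: extrapolate the delayed state through a reference
# model (here x' = x + v*dt) and compare against showing the stale state.
DT, DELAY_STEPS = 0.01, 10          # 100 Hz update, 100 ms transport delay

def predict(x, v, steps):
    for _ in range(steps):
        x += v * DT                 # constant-velocity reference model
    return x

err_delayed, err_predicted, n = 0.0, 0.0, 1000
for k in range(n):
    t = k * DT
    x_true = math.sin(t)                       # current "aircraft" position
    t_old = max(0.0, t - DELAY_STEPS * DT)     # what the visual actually has
    x_old, v_old = math.sin(t_old), math.cos(t_old)
    err_delayed += abs(x_true - x_old)
    err_predicted += abs(x_true - predict(x_old, v_old, DELAY_STEPS))
print("mean |error| delayed:", err_delayed / n, " predicted:", err_predicted / n)
```

Run as written, the predicted display error is an order of magnitude below the uncompensated one, because the prediction error is second order in the delay while the raw delay error is first order.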
Development of Simulation Methods in the Gibbs Ensemble to Predict Polymer-Solvent Phase Equilibria
NASA Astrophysics Data System (ADS)
Gartner, Thomas; Epps, Thomas; Jayaraman, Arthi
Solvent vapor annealing (SVA) of polymer thin films is a promising method for post-deposition polymer film morphology control. The large number of important parameters relevant to SVA (polymer, solvent, and substrate chemistries; incoming film condition; annealing and solvent evaporation conditions) makes systematic experimental study of SVA a time-consuming endeavor, motivating the application of simulation and theory to the SVA system to provide both mechanistic insight and scans of this wide parameter space. However, to rigorously treat the phase equilibrium between polymer film and solvent vapor while still probing the dynamics of SVA, new simulation methods must be developed. In this presentation, we compare two methods to study polymer-solvent phase equilibrium: Gibbs Ensemble Molecular Dynamics (GEMD) and Hybrid Monte Carlo/Molecular Dynamics (Hybrid MC/MD). Liquid-vapor equilibrium results are presented for the Lennard-Jones fluid and for coarse-grained polymer-solvent systems relevant to SVA. We found that the Hybrid MC/MD method is more stable and consistent than GEMD, but GEMD has significant advantages in computational efficiency. We propose that Hybrid MC/MD simulations be used for unfamiliar systems under certain conditions, followed by much faster GEMD simulations to map out the remainder of the phase window.
NASA Astrophysics Data System (ADS)
Jover, J.; Haslam, A. J.; Galindo, A.; Jackson, G.; Müller, E. A.
2012-10-01
We present a continuous pseudo-hard-sphere potential based on a cut-and-shifted Mie (generalized Lennard-Jones) potential with exponents (50, 49). Using this potential one can mimic the volumetric, structural, and dynamic properties of the discontinuous hard-sphere potential over the whole fluid range. The continuous pseudo potential has the advantage that it may be incorporated directly into off-the-shelf molecular-dynamics code, allowing the user to capitalise on existing hardware and software advances. Simulation results for the compressibility factor of the fluid and solid phases of our pseudo hard spheres are presented and compared both to the Carnahan-Starling equation of state of the fluid and to published data, the differences being indistinguishable within simulation uncertainty. The specific form of the potential is employed to simulate flexible chains formed from these pseudo hard spheres at contact (pearl-necklace model) for mc = 4, 5, 7, 8, 16, 20, 100, 201, and 500 monomer segments. The compressibility factor of the chains per unit of monomer, mc, approaches a limiting value at reasonably small values, mc < 50, as predicted by Wertheim's first-order thermodynamic perturbation theory. Simulation results are also presented for highly asymmetric mixtures of pseudo hard spheres, with diameter ratios of 3:1, 5:1, and 20:1 over the whole composition range.
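A sketch of the potential as we read the abstract: the Mie (50, 49) curve with the standard generalized Lennard-Jones prefactor, cut at its minimum and shifted so it is purely repulsive and continuous, in the spirit of WCA. Treat the exact cutoff and shift as our reconstruction rather than verified code.

```python
import math

# Cut-and-shifted Mie (n, m) = (50, 49) pseudo-hard-sphere potential.
# Standard Mie prefactor C = n/(n-m) * (n/m)^(m/(n-m)); the curve is cut at
# its minimum r_min = (n/m)^(1/(n-m)) * sigma and shifted up by epsilon.
N_EXP, M_EXP = 50, 49
C_MIE = (N_EXP / (N_EXP - M_EXP)) * (N_EXP / M_EXP) ** (M_EXP / (N_EXP - M_EXP))
R_CUT = (N_EXP / M_EXP) ** (1.0 / (N_EXP - M_EXP))   # = 50/49 in sigma units

def u_pseudo_hs(r, eps=1.0, sigma=1.0):
    """Purely repulsive, continuous stand-in for the hard-sphere potential."""
    if r >= R_CUT * sigma:
        return 0.0
    sr = sigma / r
    return C_MIE * eps * (sr ** N_EXP - sr ** M_EXP) + eps  # zero at the cut

for r in (0.97, 1.0, R_CUT):
    print(f"r = {r:.4f}  u = {u_pseudo_hs(r):.4f}")
```

As a quick check, u(sigma) = epsilon and the potential goes smoothly to zero at r = (50/49) sigma, so the repulsion is steep but continuous, which is what lets it run in unmodified MD code.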
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications, such as development of hardware components as well as testing of new PET/MR software algorithms, for example the assessment of point-spread-function-based reconstruction algorithms.
Chemistry of Titan's Aerosols: Correlation of the C/N & C/H Ratios to Pressure and Temperature
NASA Astrophysics Data System (ADS)
Bernard, J.-M.; Coll, P.; Raulin, F.
The gases present in Titan's atmosphere form organic aerosols under the action of solar radiation and of electrons from Saturn's magnetosphere. Many experimental simulations have been realised by irradiating N2/CH4 gas mixtures with different energy sources in order to reproduce the chemistry of the gas and particulate phases (Thompson et al., 1991; Mc Donald et al., 1994; de Vanssay et al., 1995; McKay, 1996; Coll et al., 1997, 1998a,b; and refs. included). Until very recently, only one organic compound remained detected on Titan but not in laboratory simulations: C4N2. A full program of experimental research has been developed at LISA, which was able to provide a complete identification of a wide range of compounds proposed to be present in Titan's atmosphere, including C4N2. The composition of the aerosol on Titan is not known, due to its complexity; in particular, its building molecules are difficult to identify. Only functional groups of analogues have been determined using spectroscopy and pyrolysis. However, this chemical composition is a key parameter for Cassini-Huygens experiments and atmospheric modeling: even the optical properties of aerosols are related to the C/N and C/H ratios. We will present results on the variation of the C/N and C/H ratios with temperature and pressure in Titan's atmosphere simulations. These data will allow photochemical models to be constrained, making them more realistic. The comprehension of the mechanism of aerosol formation on Titan as a function of altitude will then be easier.
Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications
Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2016-01-01
We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 μm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 μm in x,y-plane and ~0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 μm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 μm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods. PMID:23079763
Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications.
Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2012-11-21
We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 µm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 µm in x,y-plane and ∼0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 µm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 µm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods.
John Glenn during preflight training for STS-95
1998-04-14
S98-06937 (28 April 1998) --- U.S. Sen. John H. Glenn Jr. (D.-Ohio), uses a device called a Sky genie to simulate rappelling from a troubled Space Shuttle during training at the Johnson Space Center (JSC). Glenn has been named as a payload specialist for STS-95, scheduled for launch later this year. This exercise, in the systems integration facility at JSC, trains the crewmembers for procedures to follow in egressing a troubled shuttle on the ground. The full fuselage trainer (FFT) is at left, with the crew compartment trainer (CCT) at right. Photo Credit: Joe McNally, National Geographic, for NASA
John Glenn during preflight training for STS-95
1998-04-14
S98-06938 (28 April 1998) --- U.S. Sen. John H. Glenn Jr. (D.-Ohio), uses a device called a Sky genie to simulate rappelling from a troubled Space Shuttle during training at the Johnson Space Center (JSC). Glenn has been named as a payload specialist for STS-95, scheduled for launch later this year. This exercise, in the systems integration facility at JSC, trains the crewmembers for procedures to follow in egressing a troubled shuttle on the ground. The full fuselage trainer (FFT) is at left, with the crew compartment trainer (CCT) at right. Photo Credit: Joe McNally, National Geographic, for NASA
Higo, Junichi; Umezawa, Koji
2014-01-01
We introduce computational studies on intrinsically disordered proteins (IDPs). In particular, we present our multicanonical molecular dynamics (McMD) simulations of two IDP-partner systems: NRSF-mSin3 and pKID-KIX. McMD is one of the enhanced conformational sampling methods useful for conformational sampling of biomolecular systems. An IDP adopts a specific tertiary structure upon binding to its partner molecule, although it is unstructured in the unbound (i.e., free) state. This IDP-specific property is called "coupled folding and binding". The McMD simulation treats the biomolecules with an all-atom model immersed in an explicit solvent. In the initial configuration of the simulation, the IDP and its partner molecules are set distant from each other, and the IDP conformation is disordered. The computationally obtained free-energy landscape for coupled folding and binding has shown that native- and non-native-complex clusters are distributed in a complicated manner in the conformational space. The all-atom simulation suggests that induced folding and population selection are intricately coupled in the coupled folding and binding process. Further analyses have shown that the conformational fluctuations (dynamical flexibility) in the bound and unbound states are essential for characterizing IDP function.
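A 1D toy version of the multicanonical idea (here as Monte Carlo rather than MD) is sketched below: Metropolis moves run on an effective energy E + log W(E), and the log-weights are iteratively updated from the energy histogram until both wells of a double-well potential are visited. The toy energy, binning, and update schedule are our assumptions, not the McMD protocol.

```python
import math
import random

# 1D multicanonical toy: sample exp(-E - logW(E)) and flatten the E-histogram
# by iteratively penalizing over-visited energy bins (entropic-sampling-style
# update). Energy is in units of k_B T.

def energy(x):
    return 5.0 * (x * x - 1.0) ** 2             # double well, minima at +-1

BINS, E_MAX = 40, 5.0
log_w = [0.0] * BINS

def bin_of(e):
    return min(BINS - 1, int(BINS * min(e, E_MAX) / E_MAX))

x = 1.0
for sweep in range(30):
    hist = [0] * BINS
    for _ in range(20000):
        xp = x + random.gauss(0.0, 0.2)
        if -2.0 <= xp <= 2.0:
            d = (energy(xp) + log_w[bin_of(energy(xp))]) - (energy(x) + log_w[bin_of(energy(x))])
            if math.log(random.random() + 1e-300) < -d:
                x = xp
        hist[bin_of(energy(x))] += 1
    for b, h in enumerate(hist):                # penalize over-visited bins
        if h > 0:
            log_w[b] += math.log(h)

visited = [h for h in hist if h > 0]
print("last-sweep histogram flatness (min/max):", min(visited) / max(visited))
# Canonical averages are recovered by reweighting samples with exp(+log_w[bin]).
```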
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
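For reference, the textbook serial moves that a grand canonical MC code is built around look as follows; this is not the ParaGrandMC FORTRAN implementation. For an ideal gas (zero interaction energy) the sampled mean particle number must equal z*V, which gives a built-in self-check; the thermal wavelength is set to 1 for brevity.

```python
import math
import random

# Grand canonical MC with insertion/deletion moves for an ideal gas (dU = 0).
# Acceptance rules: insert with min(1, zV/(N+1)), delete with min(1, N/(zV)),
# where z = exp(beta*mu)/Lambda^3 is the activity (Lambda = 1 here).
BETA, MU, V = 1.0, -1.0, 100.0
Z = math.exp(BETA * MU)

def gcmc(n_moves):
    n, total, samples = 0, 0, 0
    for step in range(n_moves):
        if random.random() < 0.5:          # attempt insertion
            if random.random() < min(1.0, Z * V / (n + 1)):
                n += 1
        elif n > 0:                        # attempt deletion (rejected at n=0)
            if random.random() < min(1.0, n / (Z * V)):
                n -= 1
        if step > n_moves // 10:           # crude equilibration cutoff
            total += n
            samples += 1
    return total / samples

print("<N> sampled:", gcmc(200000), " exact z*V:", Z * V)
```

For interacting systems the same rules simply acquire a Boltzmann factor exp(-beta*dU) from the trial insertion or deletion energy.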
SU-E-T-25: Real Time Simulator for Designing Electron Dual Scattering Foil Systems.
Carver, R; Hogstrom, K; Price, M; Leblanc, J; Harris, G
2012-06-01
To create a user friendly, accurate, real time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator should allow for a relatively quick, initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator consists of an analytical algorithm for calculating electron fluence and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with a refined Moliere formalism for scattering powers. The simulator also estimates central-axis x-ray dose contamination from the dual foil system. Once the geometry of the beamline is specified, the simulator allows the user to continuously vary primary scattering foil material and thickness, secondary scattering foil material and Gaussian shape (thickness and sigma), and beam energy. The beam profile and x-ray contamination are displayed in real time. The simulator was tuned by comparison of off-axis electron fluence profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV and using present foils on the Elekta radiotherapy accelerator, the simulator profiles agreed to within 2% of MC profiles from within 20 cm of the central axis. The x-ray contamination predictions matched measured data to within 0.6%. The calculation time was approximately 100 ms using a single processor, which allows for real-time variation of foil parameters using sliding bars. A real time dual scattering foil system simulator has been developed. The tool has been useful in a project to redesign an electron dual scattering foil system for one of our radiotherapy accelerators. The simulator has also been useful as an instructional tool for our medical physics graduate students. © 2012 American Association of Physicists in Medicine.
Study on photon transport problem based on the platform of molecular optical simulation environment.
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SPn), and physical measurements to verify the performance of our method in terms of both accuracy and efficiency.
Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SPn), and physical measurements to verify the performance of our method in terms of both accuracy and efficiency. PMID:20445737
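The parallelization strategy mentioned here exploits the fact that photon histories are independent, so batches can run in separate processes with independent RNG streams and their tallies summed afterwards. The sketch below reduces the per-photon physics to a stub; MOSE's actual kernels (tissue transport plus the radiosity-radiance free-space coupling) are far richer.

```python
import random
from multiprocessing import Pool

# Embarrassingly parallel MC batching: each worker simulates a batch of
# photons with its own RNG stream; partial tallies are summed at the end.

def photon_batch(args):
    seed, n_photons, mu_a, mu_t = args
    rng = random.Random(seed)              # independent stream per worker
    absorbed = 0.0
    for _ in range(n_photons):
        w = 1.0
        while w > 1e-4:
            w *= 1.0 - mu_a / mu_t         # implicit absorption weighting
            if rng.random() < 0.1:         # stand-in escape condition
                break
        absorbed += 1.0 - w
    return absorbed

if __name__ == "__main__":
    jobs = [(seed, 25000, 0.1, 10.1) for seed in range(8)]
    with Pool(processes=4) as pool:
        parts = pool.map(photon_batch, jobs)
    print("mean absorbed weight per photon:", sum(parts) / (8 * 25000))
```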
Raman Monte Carlo simulation for light propagation for tissue with embedded objects
NASA Astrophysics Data System (ADS)
Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit
2018-02-01
Monte Carlo (MC) simulation is one of the prominent simulation techniques and is rapidly becoming the model of choice to study light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and modelled with different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid, and ellipsoid) into the multi-layered structure. These geometries are useful in providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head, and other simulation media. MC simulations were performed on various geometric media. The simulation of MCML with embedded objects (MCML-EO) was developed for propagation of photons in the defined medium with Raman scattering. The location of Raman photon generation is recorded. Simulations were performed on a modelled breast tissue with a tumor (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans for the embedded objects to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.
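Supporting embedded objects mainly requires a point-in-object test at each photon step so the local optical and Raman properties can be switched. The sketch below shows such tests for the four primitives named in the abstract; the shapes, sizes, and material labels are arbitrary examples, not MCML-EO code.

```python
# Point-in-object tests for the four embedded primitives; coordinates in mm.

def in_sphere(p, c, r):
    return sum((a - b) ** 2 for a, b in zip(p, c)) <= r * r

def in_cylinder(p, axis_xy, r, z0, z1):   # axis parallel to z
    dx, dy = p[0] - axis_xy[0], p[1] - axis_xy[1]
    return dx * dx + dy * dy <= r * r and z0 <= p[2] <= z1

def in_cuboid(p, lo, hi):
    return all(l <= a <= h for a, l, h in zip(p, lo, hi))

def in_ellipsoid(p, c, semi):
    return sum(((a - b) / s) ** 2 for a, b, s in zip(p, c, semi)) <= 1.0

def material_at(p):
    """Return a label used to look up (mu_a, mu_s, g, Raman probability)."""
    if in_ellipsoid(p, (0, 0, 12), (6, 4, 3)):
        return "tumor"
    if in_cylinder(p, (8, 0), 1.5, 0, 30):
        return "blood_vessel"
    return "background_tissue"

print(material_at((0.0, 0.0, 12.0)), material_at((8.0, 0.5, 5.0)), material_at((20, 20, 1)))
```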
Formalization and Validation of an SADT Specification Through Executable Simulation in VHDL
1991-12-01
be found in (39, 40, 41). One recent summary of the SADT methodology was written by Marca and McGowan in 1988 (32). SADT is a methodology to provide...that is required. Also, the presence of "all" inputs and controls may not be needed for the activity to proceed. Marca and McGowan (32) describe a...diagrams which describe a complete system. Marca and McGowan define an SADT Model as: "a collection of carefully coordinated descriptions, starting from a
Jin, Lihui; Eldib, Ahmed; Li, Jinsheng; Emam, Ismail; Fan, Jiajin; Wang, Lu; Ma, C-M
2014-01-06
The dosimetric advantage of modulated electron radiotherapy (MERT) has been explored by many investigators and is considered to be an advanced radiation therapy technique in the utilization of electrons. A computer-controlled electron multileaf collimator (MLC) prototype, newly designed to be added onto a Varian linac to deliver MERT, was investigated both experimentally and by Monte Carlo simulations. Four different electron energies, 6, 9, 12, and 15 MeV, were employed for this investigation. To ensure that this device was capable of delivering the electron beams properly, measurements were performed to examine the electron MLC (eMLC) leaf leakage and to determine the appropriate jaw positioning for an eMLC-shaped field in order to eliminate a secondary radiation peak that could otherwise appear outside of an intended radiation field in the case of inappropriate jaw positioning due to insufficient radiation blockage from the jaws. Phase space data were obtained by Monte Carlo (MC) simulation and recorded at the plane just above the jaws for each of the energies (6, 9, 12, and 15 MeV). As an input source, the phase space data were used in MC dose calculations for various sizes of the eMLC-shaped field (10 × 10 cm2, 3.4 × 3.4 cm2, and 2 × 2 cm2) with respect to a water phantom at source-to-surface distance (SSD) = 94 cm, while the jaws and eMLC leaves, as well as some accessories associated with the eMLC assembly, were modeled as modifiers in the calculations. The calculated results were then compared with measurements from a water scanning system. The results showed that jaw settings with 5 mm margins beyond the field shaped by the eMLC were appropriate to eliminate the secondary radiation peak while not widening the beam penumbra; the eMLC leaf leakage measurements ranged from 0.3% to 1.8% for different energies based on in-phantom measurements, which should be quite acceptable for MERT. Comparisons between MC dose calculations and measurements showed agreement within 1%/1 mm based on percentage depth doses (PDDs) and off-axis dose profiles for a range of field sizes for each of the electron energies. Our current work has demonstrated that the eMLC and other relevant components in the linac were correctly modeled and simulated via our in-house MC codes, and that the eMLC is capable of accurately delivering electron beams for various eMLC-shaped field sizes with appropriate jaw settings. In the next stage, patient-specific verification with a full MERT plan should be performed.
Eldib, Ahmed; Li, Jinsheng; Emam, Ismail; Fan, Jiajin; Wang, Lu; Ma, C‐M
2014-01-01
The dosimetric advantage of modulated electron radiotherapy (MERT) has been explored by many investigators and is considered to be an advanced radiation therapy technique in the utilization of electrons. A computer-controlled electron multileaf collimator (MLC) prototype, newly designed to be added onto a Varian linac to deliver MERT, was investigated both experimentally and by Monte Carlo simulations. Four different electron energies, 6, 9, 12, and 15 MeV, were employed for this investigation. To ensure that this device was capable of delivering the electron beams properly, measurements were performed to examine the electron MLC (eMLC) leaf leakage and to determine the appropriate jaw positioning for an eMLC-shaped field in order to eliminate a secondary radiation peak that could otherwise appear outside of an intended radiation field in the case of inappropriate jaw positioning due to insufficient radiation blockage from the jaws. Phase space data were obtained by Monte Carlo (MC) simulation and recorded at the plane just above the jaws for each of the energies (6, 9, 12, and 15 MeV). As an input source, the phase space data were used in MC dose calculations for various sizes of the eMLC-shaped field (10 × 10 cm2, 3.4 × 3.4 cm2, and 2 × 2 cm2) with respect to a water phantom at source-to-surface distance (SSD) = 94 cm, while the jaws and eMLC leaves, as well as some accessories associated with the eMLC assembly, were modeled as modifiers in the calculations. The calculated results were then compared with measurements from a water scanning system. The results showed that jaw settings with 5 mm margins beyond the field shaped by the eMLC were appropriate to eliminate the secondary radiation peak while not widening the beam penumbra; the eMLC leaf leakage measurements ranged from 0.3% to 1.8% for different energies based on in-phantom measurements, which should be quite acceptable for MERT. Comparisons between MC dose calculations and measurements showed agreement within 1%/1 mm based on percentage depth doses (PDDs) and off-axis dose profiles for a range of field sizes for each of the electron energies. Our current work has demonstrated that the eMLC and other relevant components in the linac were correctly modeled and simulated via our in-house MC codes, and that the eMLC is capable of accurately delivering electron beams for various eMLC-shaped field sizes with appropriate jaw settings. In the next stage, patient-specific verification with a full MERT plan should be performed. PACS number: 87.55.ne PMID:24423848
Efficient Implementation of MrBayes on Multi-GPU
Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang
2013-01-01
MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)3), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)3 Bayesian algorithm and its improved and parallel versions are not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)3 (aMCMCMC), of MrBayes (MC)3 on compute unified device architecture. By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new “node-by-node” task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)3 achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)3 is dramatically faster than all previous (MC)3 algorithms and scales well to large GPU clusters. PMID:23493260
Efficient implementation of MrBayes on multi-GPU.
Bao, Jie; Xia, Hongju; Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang
2013-06-01
MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)3), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)3 Bayesian algorithm and its improved and parallel versions are not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)3 (aMCMCMC), of MrBayes (MC)3 on compute unified device architecture. By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)3 achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)3 is dramatically faster than all previous (MC)3 algorithms and scales well to large GPU clusters.
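The core of (MC)3 is easy to state: several chains sample powered-up ("heated") versions of the target, and neighbouring chains periodically propose state swaps so the cold chain can cross barriers. The sketch below runs the scheme on a 1D bimodal toy density standing in for a posterior over phylogenies; temperatures and proposal widths are assumptions.

```python
import math
import random

# Metropolis-coupled MCMC: chain i samples target^beta_i; neighbours swap
# states with probability min(1, exp((b_i - b_j) * (logp(x_j) - logp(x_i)))).

def log_target(x):   # toy bimodal density (unnormalized)
    return math.log(math.exp(-0.5 * (x - 4) ** 2) + math.exp(-0.5 * (x + 4) ** 2))

BETAS = [1.0, 0.5, 0.25, 0.125]   # beta = 1 is the "cold" chain of interest

def mc3(n_iter):
    xs = [0.0] * len(BETAS)
    cold_samples = []
    for _ in range(n_iter):
        for i, beta in enumerate(BETAS):        # within-chain Metropolis
            prop = xs[i] + random.gauss(0.0, 1.0)
            if math.log(random.random() + 1e-300) < beta * (log_target(prop) - log_target(xs[i])):
                xs[i] = prop
        i = random.randrange(len(BETAS) - 1)    # propose swap of chains i, i+1
        log_r = (BETAS[i] - BETAS[i + 1]) * (log_target(xs[i + 1]) - log_target(xs[i]))
        if math.log(random.random() + 1e-300) < log_r:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_samples.append(xs[0])
    return cold_samples

s = mc3(20000)
print("cold-chain mean (should be ~0 for the symmetric target):", sum(s) / len(s))
```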
The relationship between level of autistic traits and local bias in the context of the McGurk effect
Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio
2015-01-01
The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705
Hoefling, Martin; Lima, Nicola; Haenni, Dominik; Seidel, Claus A. M.; Schuler, Benjamin; Grubmüller, Helmut
2011-01-01
Förster Resonance Energy Transfer (FRET) experiments probe molecular distances via distance-dependent energy transfer from an excited donor dye to an acceptor dye. Single molecule experiments not only probe average distances, but also distance distributions or even fluctuations, and thus provide a powerful tool to study biomolecular structure and dynamics. However, the measured energy transfer efficiency depends not only on the distance between the dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages are usually made, limiting the accuracy of the distance distributions extracted from FRET experiments. Here, we demonstrate that by combining single molecule FRET experiments with the mutual dye orientation statistics obtained from Molecular Dynamics (MD) simulations, improved estimates of distances and distributions are obtained. From the simulated time-dependent mutual orientations, FRET efficiencies are calculated and the full statistics of individual photon absorption, energy transfer, and photon emission events is obtained from subsequent Monte Carlo (MC) simulations of the FRET kinetics. All recorded emission events are collected into bursts from which efficiency distributions are calculated in close resemblance to the actual FRET experiment, taking shot noise fully into account. Using polyproline chains with attached Alexa 488 and Alexa 594 dyes as a test system, we demonstrate the feasibility of this approach by direct comparison to experimental data. We identified cis-isomers and different static local environments as sources of the experimentally observed heterogeneity. Reconstructions of distance distributions from experimental data at different levels of theory demonstrate how the respective underlying assumptions and approximations affect the obtained accuracy. Our results show that dye fluctuations obtained from MD simulations, combined with MC single photon kinetics, provide a versatile tool to improve the accuracy of distance distributions that can be extracted from measured single molecule FRET efficiencies. PMID:21629703
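A minimal sketch of the orientation and shot-noise effects described above, assuming random uncorrelated dye orientations in place of MD-derived ones, and invented values for the Förster radius, dye separation, and burst size:

    import numpy as np

    rng = np.random.default_rng(1)

    def random_unit_vectors(n):
        v = rng.normal(size=(n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    R0_iso, r_DA, n_bursts, photons_per_burst = 5.4, 5.0, 2000, 50  # nm, nm

    # Orientation factor kappa^2 (from MD snapshots in the article; random here).
    d, a, rhat = (random_unit_vectors(n_bursts) for _ in range(3))
    kappa2 = (np.sum(d * a, axis=1)
              - 3.0 * np.sum(d * rhat, axis=1) * np.sum(a * rhat, axis=1)) ** 2

    # kappa^2 rescales R0^6 relative to its isotropic average 2/3.
    E = 1.0 / (1.0 + r_DA ** 6 / (R0_iso ** 6 * kappa2 / (2.0 / 3.0)))

    # MC photon kinetics: each absorbed photon transfers with probability E, so
    # a finite burst yields a shot-noise-broadened efficiency histogram.
    E_burst = rng.binomial(photons_per_burst, E) / photons_per_burst
    print("mean burst efficiency:", E_burst.mean(),
          " burst-to-burst std:", E_burst.std())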
Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yubin; Yuan, Zhen, E-mail: zhenyuan@umac.mo
Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors’ two schemes were theoretically and experimentally examined using simulations, tissue-mimicking phantoms, and ex vivo and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media and by using high-absorption media when the diffusion approximation was not effective at describing the photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed by the property profiles, where the authors discovered that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes, the authors discovered that the errors ranged from 0 for relatively small targets to 26% for relatively large targets, whereas for the recovered optical properties, the errors ranged from 0% to 12.5% for different cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe the photon propagation in biological tissues. In particular, their methods are able to resolve the intrinsic difficulties that occur when quantitative PAT is conducted by combining conventional PAT with the diffusion approximation or with radiation transport modeling.
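The fixed-point idea behind coupling PAT with a photon-transport model fits in a few lines. In the sketch below an analytic diffusion-style decay stands in for the authors' MC fluence estimate so the loop stays short; the Grüneisen coefficient, depth, and scattering values are assumptions, not the paper's settings:

    import numpy as np

    # Recover mu_a from a photoacoustic pressure p0 = Gamma * mu_a * Phi(mu_a)
    # by iterating mu_a <- p0 / (Gamma * Phi(mu_a)).
    Gamma, z, mu_s_prime = 1.0, 0.5, 10.0          # assumed values (cm units)

    def fluence(mu_a):
        mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
        return np.exp(-mu_eff * z)                 # surrogate for the MC estimate

    mu_a_true = 0.3
    p0 = Gamma * mu_a_true * fluence(mu_a_true)    # "measured" PAT pressure

    mu_a = 0.05                                    # initial guess
    for _ in range(50):
        mu_a = p0 / (Gamma * fluence(mu_a))

    print("recovered mu_a:", mu_a, " true:", mu_a_true)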
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
NASA Astrophysics Data System (ADS)
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-01
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2016-01-01
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456
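A minimal sketch of the non-uniform spot sampling that underlies APS; the spot intensities and history budget are invented, and the real library does this inside a GPU MC engine rather than with a host-side histogram:

    import numpy as np

    rng = np.random.default_rng(2)

    # Draw MC histories from pencil-beam spots with probability proportional
    # to the current spot intensity, concentrating effort where it matters.
    spot_intensity = np.array([10.0, 5.0, 1.0, 0.1])   # hypothetical fluence map
    n_histories = 100_000

    p = spot_intensity / spot_intensity.sum()
    counts = np.bincount(rng.choice(len(p), size=n_histories, p=p), minlength=len(p))

    # The relative statistical error of each spot's dose scales like 1/sqrt(n):
    print("histories per spot:", counts)
    print("relative error per spot:", 1.0 / np.sqrt(np.maximum(counts, 1)))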
Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
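The per-subcarrier FDE weighting is easy to sketch. The toy below uses one OFDM-like block with flat Rayleigh fading per subcarrier and QPSK, with no time-domain spreading, so it shows only the MMSE weight w_k = conj(H_k) / (|H_k|^2 + 1/SNR) that recovers the frequency diversity, not the full orthogonal MC DS-CDMA transceiver:

    import numpy as np

    rng = np.random.default_rng(3)

    Nc, snr_db = 64, 10.0
    snr = 10.0 ** (snr_db / 10.0)

    bits = rng.integers(0, 2, size=(2, Nc))                 # QPSK per subcarrier
    s = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

    H = (rng.normal(size=Nc) + 1j * rng.normal(size=Nc)) / np.sqrt(2)
    noise = (rng.normal(size=Nc) + 1j * rng.normal(size=Nc)) / np.sqrt(2 * snr)
    r = H * s + noise                                       # received block

    w = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)           # MMSE FDE weights
    s_hat = w * r

    errors = np.sum((s_hat.real > 0) != (bits[0] == 1)) \
           + np.sum((s_hat.imag > 0) != (bits[1] == 1))
    print("bit errors in one block:", errors, "of", 2 * Nc)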
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
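A sketch of the Pauli-blocking rejection step, with a coarse cell-occupancy estimate standing in for the full k-space bookkeeping (cell counts and the scattering model are illustrative, not the paper's graphene setup):

    import numpy as np

    rng = np.random.default_rng(4)

    # A proposed scattering into final cell k' is accepted with probability
    # 1 - f(k'), where f is the occupancy estimated from the ensemble itself.
    n_cells, states_per_cell, n_electrons = 50, 200, 4000
    cell = rng.integers(0, n_cells, size=n_electrons)

    def occupancy():
        return np.bincount(cell, minlength=n_cells) / states_per_cell

    for _ in range(1000):
        i = rng.integers(n_electrons)        # electron chosen to scatter
        new = rng.integers(n_cells)          # final cell proposed by the rates
        if rng.random() < max(0.0, 1.0 - occupancy()[new]):
            cell[i] = new                    # not Pauli-blocked: accept

    print("max occupancy after relaxation (stays <= 1):", occupancy().max())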
STS-31 MS McCandless and MS Sullivan during JSC WETF underwater simulation
1990-03-05
This overall view shows STS-31 Mission Specialist (MS) Bruce McCandless II (left) and MS Kathryn D. Sullivan making a practice space walk in JSC's Weightless Environment Training Facility (WETF) Bldg 29 pool. McCandless works with a mockup of the remote manipulator system (RMS) end effector which is attached to a grapple fixture on the Hubble Space Telescope (HST) mockup. Sullivan manipulates HST hardware on the Support System Module (SSM) forward shell. SCUBA-equipped divers monitor the extravehicular mobility unit (EMU) suited crewmembers during this simulated extravehicular activity (EVA). No EVA is planned for the Hubble Space Telescope (HST) deployment, but the duo has trained for contingencies which might arise during the STS-31 mission aboard Discovery, Orbiter Vehicle (OV) 103. Photo taken by NASA JSC photographer Sheri Dunnette.
STS-31 MS McCandless and MS Sullivan during JSC WETF underwater simulation
NASA Technical Reports Server (NTRS)
1990-01-01
This overall view shows STS-31 Mission Specialist (MS) Bruce McCandless II (left) and MS Kathryn D. Sullivan making a practice space walk in JSC's Weightless Environment Training Facility (WETF) Bldg 29 pool. McCandless works with a mockup of the remote manipulator system (RMS) end effector which is attached to a grapple fixture on the Hubble Space Telescope (HST) mockup. Sullivan manipulates HST hardware on the Support System Module (SSM) forward shell. SCUBA-equipped divers monitor the extravehicular mobility unit (EMU) suited crewmembers during this simulated extravehicular activity (EVA). No EVA is planned for the Hubble Space Telescope (HST) deployment, but the duo has trained for contingencies which might arise during the STS-31 mission aboard Discovery, Orbiter Vehicle (OV) 103. Photo taken by NASA JSC photographer Sheri Dunnette.
NASA Astrophysics Data System (ADS)
Drapek, R. J.; Kim, J. B.
2013-12-01
We simulated ecosystem response to climate change in the USA and Canada at a 5 arc-minute grid resolution using the MC1 dynamic global vegetation model and nine CMIP3 future climate projections as input. The climate projections were produced by 3 GCMs simulating 3 SRES emissions scenarios. We examined MC1 outputs for the conterminous USA by summarizing them by EPA level II and III ecoregions to characterize model skill and evaluate the magnitude and uncertainties of simulated ecosystem response to climate change. First, we evaluated model skill by comparing outputs from the recent historical period with benchmark datasets. Distribution of potential natural vegetation simulated by MC1 was compared with Kuchler's map. Above ground live carbon simulated by MC1 was compared with the National Biomass and Carbon Dataset. Fire return intervals calculated by MC1 were compared with maximum and minimum values compiled for the United States. Each EPA Level III Ecoregion was scored for average agreement with corresponding benchmark data and an average score was calculated for all three types of output. Greatest agreement with benchmark data happened in the Western Cordillera, the Ozark / Ouachita-Appalachian Forests, and the Southeastern USA Plains (EPA Level II Ecoregions). The lowest agreement happened in the Everglades and the Tamaulipas-Texas Semiarid Plain. For simulated ecosystem response to future climate projections we examined MC1 output for shifts in vegetation type, vegetation carbon, runoff, and biomass consumed by fire. Each ecoregion was scored for the amount of change from historical conditions for each variable and an average score was calculated. Smallest changes were forecast for Western Cordillera and Marine West Coast Forest ecosystems. Largest changes were forecast for the Cold Deserts, the Mixed Wood Plains, and the Central USA Plains. By combining scores of model skill for the historical period for each EPA Level 3 Ecoregion with scores representing the magnitude of ecosystem changes in the future, we identified high and low uncertainty ecoregions. The largest anticipated changes and the lowest measures of model skill coincide in the Central USA Plains and the Mixed Wood Plains. The combination of low model skill and high degree of ecosystem change elevate the importance of our uncertainty in this ecoregion. The highest projected changes coincide with relatively high model skill in the Cold Deserts. Climate adaptation efforts are the most likely to pay off in these regions. Finally, highest model skill and lowest anticipated changes coincide in the Western Cordillera and the Marine West Coast Forests. These regions may be relatively low-risk for climate change impacts when compared to the other ecoregions. These results represent only the first step in this type of analysis; there exist many ways to strengthen it. One, MC1 calibrations can be optimized using a structured optimization technique. Two, a larger set of climate projections can be used to capture a fuller range of GCMs and emissions scenarios. And three, employing an ensemble of vegetation models would make the analysis more robust.
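One simple way to combine the two scores described above is sketched below; the numbers and the product rule are illustrative only, since the study inspects skill and change magnitude jointly rather than through a single formula:

    import numpy as np

    # Flag ecoregions where large projected change meets low historical skill.
    skill  = np.array([0.9, 0.8, 0.4, 0.5])   # hypothetical agreement scores, 0..1
    change = np.array([0.2, 0.7, 0.8, 0.3])   # normalized projected change, 0..1

    concern = change * (1.0 - skill)
    for i, c in enumerate(concern):
        print(f"ecoregion {i}: change={change[i]:.1f} skill={skill[i]:.1f} concern={c:.2f}")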
NASA Astrophysics Data System (ADS)
Kerns, B. K.; Kim, J. B.; Day, M. A.; Pitts, B.; Drapek, R. J.
2017-12-01
Ecosystem process models are increasingly being used in regional assessments to explore potential changes in future vegetation and NPP due to climate change. We use the dynamic global vegetation model MAPSS-Century 2 (MC2) as one line of evidence for regional climate change vulnerability assessments for the US Forest Service, focusing our fine-tuning model calibration on observational sources related to forest vegetation. However, there is much interest in understanding projected changes for arid rangelands in the western US such as grasslands, shrublands, and woodlands. Rangelands provide many ecosystem service benefits, support local rural community sustainability, provide habitat for threatened and endangered species, and are threatened by annual grass invasion. Past work suggested MC2 performance related to arid rangeland plant functional types (PFTs) was poor, and the model has difficulty distinguishing annual versus perennial grasslands. Our objectives are to increase the model performance for rangeland simulations and explore the potential for splitting the grass plant functional type into annual and perennial. We used the tri-state Blue Mountain Ecoregion as our study area and maps of potential vegetation from interpolated ground data, the National Land Cover Database, and ancillary NPP data derived from the MODIS satellite. MC2 historical simulations for the area overestimated woodland occurrence and underestimated shrubland and grassland PFTs. The spatial locations of the rangeland PFTs also often did not align well with observational data. While some disagreement may be due to differences in the respective classification rules, the errors are largely linked to MC2's tree and grass biogeography and physiology algorithms. Presently, only grass and forest productivity measures and carbon stocks are used to distinguish PFTs. MC2 grass and tree productivity simulation is problematic, in particular grass seasonal phenology in relation to seasonal patterns of temperature and precipitation. The algorithm also does not accurately translate simulated carbon stocks into the canopy allometry of the woodland tree species that dominate the BME, thereby inaccurately shading out the grasses in the understory. We are devising improvements to address these shortcomings in the model architecture.
SU-E-T-155: Calibration of Variable Longitudinal Strength 103Pd Brachytherapy Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J; Radtke, J; Micka, J
Purpose: Brachytherapy sources with variable longitudinal strength (VLS) allow for a customized intensity along the length of the source. These have applications in focal brachytherapy treatments of prostate cancer where dose boosting can be achieved through modulation of intra-source strengths. This work focused on development of a calibration methodology for VLS sources based on measurements and Monte Carlo (MC) simulations of five 1 cm {sup 103}Pd sources each containing four regions of variable {sup 103}Pd strength. Methods: The air-kerma strengths of the sources were measured with a variable-aperture free-air chamber (VAFAC). Source strengths were also measured using a well chamber. The in-air azimuthal and polar anisotropy of the sources were measured by rotating them in front of a NaI scintillation detector and were calculated with MC simulations. Azimuthal anisotropy results were normalized to their mean intensity values. Polar anisotropy results were normalized to their average transverse axis intensity values. The relative longitudinal strengths of the sources were measured via on-contact irradiations with radiochromic film, and were calculated with MC simulations. Results: The variable {sup 103}Pd loading of the sources was validated by VAFAC and well chamber measurements. Ratios of VAFAC air-kerma strengths and well chamber responses were within ±1.3% for all sources. Azimuthal anisotropy results indicated that ≥95% of the normalized values for all sources were within ±1.7% of the mean values. Polar anisotropy results indicated variations within ±0.3% for a ±7.6° angular region with respect to the source transverse axis. Locations and intensities of the {sup 103}Pd regions were validated by radiochromic film measurements and MC simulations. Conclusion: The calibration methodology developed in this work confirms that the VLS sources investigated have a high level of polar uniformity, and that the strength and longitudinal intensity can be verified experimentally and through MC simulations. {sup 103}Pd sources were provided by CivaTech Oncology, Inc.
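The anisotropy normalizations are short to state in code; the synthetic detector readings below are placeholders for the NaI scan data:

    import numpy as np

    rng = np.random.default_rng(5)

    # Azimuthal readings are normalized to their mean value.
    azimuthal = 1.0 + 0.01 * rng.normal(size=72)          # hypothetical scan
    azimuthal_norm = azimuthal / azimuthal.mean()

    # Polar readings are normalized to the average value near the transverse axis.
    polar_angles = np.linspace(-90.0, 90.0, 37)           # 0 = transverse axis
    polar = np.cos(np.radians(polar_angles) / 3.0) + 0.005 * rng.normal(size=37)
    polar_norm = polar / polar[np.abs(polar_angles) <= 7.6].mean()

    print("azimuthal spread about mean: +/-%.2f%%"
          % (100.0 * np.abs(azimuthal_norm - 1.0).max()))
    print("polar variation within +/-7.6 deg: +/-%.2f%%"
          % (100.0 * np.abs(polar_norm[np.abs(polar_angles) <= 7.6] - 1.0).max()))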
A fragment-based approach to the SAMPL3 Challenge
NASA Astrophysics Data System (ADS)
Kulp, John L.; Blumenthal, Seth N.; Wang, Qiang; Bryan, Richard L.; Guarnieri, Frank
2012-05-01
The success of molecular fragment-based design depends critically on the ability to make predictions of binding poses and of affinity ranking for compounds assembled by linking fragments. The SAMPL3 Challenge provides a unique opportunity to evaluate the performance of a state-of-the-art fragment-based design methodology with respect to these requirements. In this article, we present results derived from linking fragments to predict affinity and pose in the SAMPL3 Challenge. The goal is to demonstrate how incorporating different aspects of modeling protein-ligand interactions impacts the accuracy of the predictions, including protein dielectric models, charged versus neutral ligands, ΔΔG solvation energies, and induced conformational stress. The core method is based on annealing of chemical potential in a Grand Canonical Monte Carlo (GC/MC) simulation. By imposing an initially very high chemical potential and then automatically running a sequence of simulations at successively decreasing chemical potentials, the GC/MC simulation efficiently discovers statistical distributions of bound fragment locations and orientations not found reliably without the annealing. This method accounts for configurational entropy and the role of bound water molecules, and results in a prediction of all the locations on the protein that have any affinity for the fragment. Disregarding any of these factors in affinity-rank prediction leads to significantly worse correlation with experimentally determined free energies of binding. We relate three important conclusions from this challenge as applied to GC/MC: (1) modeling neutral ligands (regardless of the charged state in the active site) produced better affinity ranking than using charged ligands, although, in both cases, the poses were almost exactly overlaid; (2) simulating explicit water molecules in the GC/MC gave better affinity and pose predictions; and (3) applying a ΔΔG solvation correction further improved the ranking of the neutral ligands. Using the GC/MC method under a variety of parameters in the blinded SAMPL3 Challenge provided important insights into the relevant parameters and boundaries in predicting binding affinities using simulated annealing of chemical potential calculations.
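The annealing-of-chemical-potential loop can be sketched with an ideal-gas grand canonical MC, where the textbook insertion/deletion acceptance rules are transparent; there are no fragment-protein energies here, so this is the bare mechanism rather than the production GC/MC:

    import numpy as np

    rng = np.random.default_rng(6)

    # Start at high chemical potential (particles flood in), then lower it
    # stepwise; with interactions, only high-affinity sites would stay occupied.
    V, Lambda3, beta = 1000.0, 1.0, 1.0
    N = 0
    for mu in np.linspace(1.0, -4.0, 11):
        for _ in range(20000):
            if rng.random() < 0.5:                         # insertion attempt
                if rng.random() < min(1.0, V / (Lambda3 * (N + 1)) * np.exp(beta * mu)):
                    N += 1
            elif N > 0:                                    # deletion attempt
                if rng.random() < min(1.0, Lambda3 * N / V * np.exp(-beta * mu)):
                    N -= 1
        print(f"mu={mu:+.1f}  N ~ {N}  (ideal-gas expectation {V * np.exp(beta * mu):.1f})")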
Simulation and analysis of a proposed replacement for the McCook port of entry inspection station
DOT National Transportation Integrated Search
1999-04-01
This report describes a study of a proposed replacement for the McCook Port of Entry inspection station at the entry to South Dakota. In order to assess the potential for a low-speed weigh in motion (WIM) scale within the station to pre-screen trucks...
Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria
ERIC Educational Resources Information Center
Li, Zhao; Corti, David S.
2018-01-01
The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…
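In the same spirit, a reaction move for a textbook ideal-gas isomerization A <=> B can be sampled with Metropolis MC. The reaction data are invented, and real RxMC acceptance rules carry extra combinatorial and volume factors when the stoichiometry changes the total particle number:

    import numpy as np

    rng = np.random.default_rng(7)

    # A reaction move converts one A into one B (or back) and is accepted with
    # min(1, exp(-dG_total/RT)); the composition should satisfy x_B/x_A = K.
    R, T, dG0 = 8.314, 298.15, -2000.0      # J/mol; invented reaction data
    N = 1000                                # isomerization, so N is fixed
    nA = N // 2

    def G_over_RT(nA):
        nB = N - nA
        g = nB * dG0 / (R * T)              # standard-state part (A as reference)
        for n in (nA, nB):                  # ideal entropy of mixing
            if n > 0:
                g += n * np.log(n / N)
        return g

    samples = []
    for step in range(200000):
        nA_new = nA + (1 if rng.random() < 0.5 else -1)
        if 0 < nA_new < N and np.log(rng.random()) < G_over_RT(nA) - G_over_RT(nA_new):
            nA = nA_new
        if step > 100000:
            samples.append(nA)

    nA_avg = np.mean(samples)
    print("MC x_B/x_A:", (N - nA_avg) / nA_avg, " analytic K:", np.exp(-dG0 / (R * T)))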
HEP Software Foundation Community White Paper Working Group - Detector Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apostolakis, J.
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
Simulation of streamflow in the McTier Creek watershed, South Carolina
Feaster, Toby D.; Golden, Heather E.; Odom, Kenneth R.; Lowery, Mark A.; Conrads, Paul; Bradley, Paul M.
2010-01-01
The McTier Creek watershed is located in the Sand Hills ecoregion of South Carolina and is a small catchment within the Edisto River Basin. Two watershed hydrology models were applied to the McTier Creek watershed as part of a larger scientific investigation to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin. The two models are the topography-based hydrological model (TOPMODEL) and the grid-based mercury model (GBMM). TOPMODEL uses the variable-source area concept for simulating streamflow, and GBMM uses a spatially explicit modified curve-number approach for simulating streamflow. The hydrologic output from TOPMODEL can be used explicitly to simulate the transport of mercury in separate applications, whereas the hydrology output from GBMM is used implicitly in the simulation of mercury fate and transport in GBMM. The modeling efforts were a collaboration between the U.S. Geological Survey and the U.S. Environmental Protection Agency, National Exposure Research Laboratory. Calibrations of TOPMODEL and GBMM were done independently while using the same meteorological data and the same period of record of observed data. Two U.S. Geological Survey streamflow-gaging stations were available for comparison of observed daily mean flow with simulated daily mean flow: station 02172300, McTier Creek near Monetta, South Carolina, and station 02172305, McTier Creek near New Holland, South Carolina. The period of record at the Monetta gage covers a broad range of hydrologic conditions, including a drought and a significant wet period. Calibrating the models under these extreme conditions along with the normal flow conditions included in the record enhances the robustness of the two models. Several quantitative assessments of the goodness of fit between model simulations and the observed daily mean flows were done. These included the Nash-Sutcliffe coefficient of model-fit efficiency index, Pearson's correlation coefficient, the root mean square error, the bias, and the mean absolute error. In addition, a number of graphical tools were used to assess how well the models captured the characteristics of the observed data at the Monetta and New Holland streamflow-gaging stations. The graphical tools included temporal plots of simulated and observed daily mean flows, flow-duration curves, single-mass curves, and various residual plots. The results indicated that TOPMODEL and GBMM generally produced simulations that reasonably capture the quantity, variability, and timing of the observed streamflow. For the periods modeled, the total volume of simulated daily mean flows as compared to the total volume of the observed daily mean flow from TOPMODEL was within 1 to 5 percent, and the total volume from GBMM was within 1 to 10 percent. A noticeable characteristic of the simulated hydrographs from both models is the complexity of balancing groundwater recession and flow at the streamgage when flows peak and recede rapidly. However, GBMM results indicate that groundwater recession, which affects the receding limb of the hydrograph, was more difficult to estimate with the spatially explicit curve number approach.
Although the purpose of this report is not to directly compare both models, given the characteristics of the McTier Creek watershed and the fact that GBMM uses the spatially explicit curve number approach as compared to the variable-source-area concept in TOPMODEL, GBMM was able to capture the flow characteristics reasonably well.
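The Nash-Sutcliffe efficiency and the companion statistics named above are brief to compute; the flows below are invented placeholders for the gage records:

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE: 1 is a perfect fit; 0 means no better than the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([12.0, 15.0, 40.0, 33.0, 20.0, 16.0, 14.0])  # daily mean flow
    sim = np.array([11.0, 17.0, 35.0, 30.0, 22.0, 15.0, 13.0])

    print("NSE:", nash_sutcliffe(obs, sim))
    print("bias:", np.mean(sim - obs),
          " RMSE:", np.sqrt(np.mean((sim - obs) ** 2)),
          " MAE:", np.mean(np.abs(sim - obs)))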
Correction for human head motion in helical x-ray CT
NASA Astrophysics Data System (ADS)
Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.
2016-02-01
Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.
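A sketch of the kind of voxelwise comparison between a motion-corrected image and its motion-free reference; the paper's specific similarity metrics are not spelled out above, so RMSE and Pearson correlation serve as common stand-ins:

    import numpy as np

    def similarity(reference, corrected):
        a = np.asarray(reference, float).ravel()
        b = np.asarray(corrected, float).ravel()
        rmse = np.sqrt(np.mean((a - b) ** 2))
        r = np.corrcoef(a, b)[0, 1]
        return rmse, r

    rng = np.random.default_rng(8)
    reference = rng.random((64, 64))                          # toy "ground truth"
    corrected = reference + 0.02 * rng.normal(size=(64, 64))  # small residual error
    print("RMSE=%.4f  r=%.4f" % similarity(reference, corrected))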
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limited the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
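Particle navigation in such a geometry reduces to distances along the flight direction to quadric surfaces. A sketch assuming the generic form x^T A x + b.x + c = 0 (the package's actual surface parameterization may differ):

    import numpy as np

    def distance_to_quadric(p, d, A, b, c, eps=1e-12):
        """Smallest positive distance from point p along direction d to the
        surface x^T A x + b.x + c = 0, or inf if the ray never crosses it."""
        qa = d @ A @ d
        qb = 2.0 * (p @ A @ d) + b @ d
        qc = p @ A @ p + b @ p + c
        if abs(qa) < eps:                    # effectively planar along d
            t = -qc / qb if qb != 0.0 else -1.0
            return t if t > eps else np.inf
        disc = qb * qb - 4.0 * qa * qc
        if disc < 0.0:
            return np.inf
        roots = (-qb + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2.0 * qa)
        pos = roots[roots > eps]
        return pos.min() if pos.size else np.inf

    # Unit sphere (A = I, b = 0, c = -1): from the origin along +z, t = 1.
    print(distance_to_quadric(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                              np.eye(3), np.zeros(3), -1.0))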
NASA Astrophysics Data System (ADS)
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2008-02-01
Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results.
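The bookkeeping at a scatter site can be sketched as follows, with isotropic scatter and a uniform circular attenuator as simplifying assumptions; the real MP-CFD additionally applies the distance-dependent blurring kernel:

    import numpy as np

    # Copy the photon toward every projection angle at once, weighting each
    # copy by the (discretized) scatter probability into that direction and
    # by the attenuation along the exit path.
    mu, R = 0.15, 10.0                          # 1/cm and phantom radius (assumed)
    angles = np.radians(np.arange(0, 360, 6))   # 60 projection angles

    def exit_path_length(x, y, theta):
        """Chord length from (x, y) to the rim of a circle of radius R."""
        b = x * np.cos(theta) + y * np.sin(theta)
        return -b + np.sqrt(b * b - (x * x + y * y - R * R))

    x, y, photon_weight = 2.0, -3.0, 1.0        # scatter site inside the phantom
    per_angle_prob = 1.0 / len(angles)          # isotropic phase function
    weights = photon_weight * per_angle_prob * np.exp(-mu * exit_path_length(x, y, angles))

    print("summed weight deposited across all projections:", weights.sum())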
Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method
2016-01-01
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
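The first (inherent-pKa) step of the two-step scheme is a plain two-state Metropolis decision, sketched below for an acid site; the values are illustrative, and the costly neMD switching trajectory that follows an accepted flip is omitted:

    import numpy as np

    rng = np.random.default_rng(9)
    LN10 = np.log(10.0)

    # For an acid, the deprotonated state lies ln(10)*(pKa - pH) kT above the
    # protonated one, so a flip is accepted with min(1, exp(-dG)).
    def attempt_flip(protonated, pH, pKa):
        dG = LN10 * ((pKa - pH) if protonated else (pH - pKa))  # cost in kT
        return (not protonated) if np.log(rng.random()) < -dG else protonated

    # Sanity check against Henderson-Hasselbalch for an Asp-like pKa 4.0 at pH 5.
    state, n_deprot, n_steps = True, 0, 100000
    for _ in range(n_steps):
        state = attempt_flip(state, pH=5.0, pKa=4.0)
        n_deprot += (not state)
    print("deprotonated fraction:", n_deprot / n_steps,
          " expected:", 1.0 / (1.0 + 10.0 ** (4.0 - 5.0)))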
NASA Astrophysics Data System (ADS)
Xiong, Ming; Zheng, Huinan; Wu, S. T.; Wang, Yuming; Wang, Shui
2007-11-01
Numerical studies of the interplanetary "multiple magnetic clouds (Multi-MC)" are performed by a 2.5-dimensional ideal magnetohydrodynamic (MHD) model in the heliospheric meridional plane. Both slow MC1 and fast MC2 are initially emerged along the heliospheric equator, one after another with different time intervals. The coupling of two MCs could be considered as the comprehensive interaction between two systems, each comprising of an MC body and its driven shock. The MC2-driven shock and MC2 body are successively involved into interaction with MC1 body. The momentum is transferred from MC2 to MC1. After the passage of MC2-driven shock front, magnetic field lines in MC1 medium previously compressed by MC2-driven shock are prevented from being restored by the MC2 body pushing. MC1 body undergoes the most violent compression from the ambient solar wind ahead, continuous penetration of MC2-driven shock through MC1 body, and persistent pushing of MC2 body at MC1 tail boundary. As the evolution proceeds, the MC1 body suffers from larger and larger compression, and its original vulnerable magnetic elasticity becomes stiffer and stiffer. So there exists a maximum compressibility of Multi-MC when the accumulated elasticity can balance the external compression. This cutoff limit of compressibility mainly decides the maximally available geoeffectiveness of Multi-MC because the geoeffectiveness enhancement of MCs interacting is ascribed to the compression. Particularly, the greatest geoeffectiveness is excited among all combinations of each MC helicity, if magnetic field lines in the interacting region of Multi-MC are all southward. Multi-MC completes its final evolutionary stage when the MC2-driven shock is merged with MC1-driven shock into a stronger compound shock. With respect to Multi-MC geoeffectiveness, the evolution stage is a dominant factor, whereas the collision intensity is a subordinate one. The magnetic elasticity, magnetic helicity of each MC, and compression between each other are the key physical factors for the formation, propagation, evolution, and resulting geoeffectiveness of interplanetary Multi-MC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space, and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
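A batch-means version of such a stopping rule is easy to sketch; the tally distribution, tolerance, and batching below are invented, and the paper's criterion is formulated per voxel with a hypothesis-test sign convention that differs in detail:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)

    # Group histories into batches; by the central limit theorem the batch
    # means are ~normal, so stop when the t confidence half-width is small.
    tol, confidence, batch_size = 0.01, 0.95, 10000
    batch_means = []

    while True:
        dose = rng.exponential(scale=1.0, size=batch_size)   # stand-in tally
        batch_means.append(dose.mean())
        n = len(batch_means)
        if n < 3:
            continue
        sem = np.std(batch_means, ddof=1) / np.sqrt(n)
        half_width = stats.t.ppf(confidence, df=n - 1) * sem
        if half_width < tol * np.mean(batch_means):          # relative tolerance
            break

    print(f"stopped after {n} batches; dose = {np.mean(batch_means):.4f} "
          f"+/- {half_width:.4f}")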
On the definition of a Monte Carlo model for binary crystal growth.
Los, J H; van Enckevort, W J P; Meekes, H; Vlieg, E
2007-02-01
We show that consistency of the transition probabilities in a lattice Monte Carlo (MC) model for binary crystal growth with the thermodynamic properties of a system does not guarantee the MC simulations near equilibrium to be in agreement with the thermodynamic equilibrium phase diagram for that system. The deviations remain small for systems with small bond energies, but they can increase significantly for systems with large melting entropy, typical for molecular systems. These deviations are attributed to the surface kinetics, which is responsible for a metastable zone below the liquidus line where no growth occurs, even in the absence of a 2D nucleation barrier. Here we propose an extension of the MC model that introduces a freedom of choice in the transition probabilities while staying within the thermodynamic constraints. This freedom can be used to eliminate the discrepancy between the MC simulations and the thermodynamic equilibrium phase diagram. Agreement is achieved for that choice of the transition probabilities yielding the fastest decrease of the free energy (i.e., largest growth rate) of the system at a temperature slightly below the equilibrium temperature. An analytical model is developed, which reproduces quite well the MC results, enabling a straightforward determination of the optimal set of transition probabilities. Application of both the MC and analytical model to conditions well away from equilibrium, giving rise to kinetic phase diagrams, shows that the effect of kinetics on segregation is even stronger than that predicted by previous models.
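The freedom the authors exploit is visible in a two-line rule: detailed balance pins only the ratio of attachment to detachment probabilities, leaving a common prefactor free to tune the kinetics. A sketch with an arbitrary energy difference:

    import numpy as np

    def transition_pair(dE_over_kT, nu):
        """(P_att, P_det) with P_att/P_det = exp(-dE/kT) for any prefactor nu."""
        ratio = np.exp(-dE_over_kT)
        return nu * min(1.0, ratio), nu * min(1.0, 1.0 / ratio)

    for nu in (1.0, 0.5):
        p_att, p_det = transition_pair(1.2, nu)
        print(f"nu={nu}: P_att={p_att:.3f} P_det={p_det:.3f} "
              f"ratio={p_att / p_det:.3f} vs exp(-dE/kT)={np.exp(-1.2):.3f}")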
Mutual coupling, channel model, and BER for curvilinear antenna arrays
NASA Astrophysics Data System (ADS)
Huang, Zhiyong
This dissertation introduces a wireless communications system with an adaptive beam-former and investigates its performance with different antenna arrays. Mutual coupling, real antenna elements and channel models are included to examine the system performance. In a beamforming system, mutual coupling (MC) among the elements can significantly degrade the system performance. However, MC effects can be compensated if an accurate model of mutual coupling is available. A mutual coupling matrix model is utilized to compensate mutual coupling in the beamforming of a uniform circular array (UCA). Its performance is compared with other models in uplink and downlink beamforming scenarios. In addition, the predictions are compared with measurements and verified with results from full-wave simulations. In order to accurately investigate the minimum mean-square-error (MSE) of an adaptive array in MC, two different noise models, the environmental and the receiver noise, are modeled. The minimum MSEs with and without data domain MC compensation are analytically compared. The influence of mutual coupling on the convergence is also examined. In addition, the weight compensation method is proposed to attain the desired array pattern. Adaptive arrays with different geometries are implemented with the minimum MSE algorithm in the wireless communications system to combat interference at the same frequency. The bit-error-rate (BER) of systems with UCA, uniform rectangular array (URA) and UCA with center element are investigated in additive white Gaussian noise plus well-separated signals or random direction signals scenarios. The output SINR of an adaptive array with multiple interferers is analytically examined. The influence of the adaptive algorithm convergence on the BER is investigated. The UCA is then investigated in a narrowband Rician fading channel. The channel model is built and the space correlations are examined. The influence of the number of signal paths, number of the interferers, Doppler spread and convergence are investigated. The tracking mode is introduced to the adaptive array system, and it further improves the BER. The benefit of using faster data rate (wider bandwidth) is discussed. In order to have better performance in a 3D space, the geometries of uniform spherical array (USAs) are presented and different configurations of USAs are discussed. The LMS algorithm based on temporal a priori information is applied to UCAs and USAs to beamform the patterns. Their performances are compared based on simulation results. Based on the analytical and simulation results, it can be concluded that mutual coupling slightly influences the performance of the adaptive array in communication systems. In addition, arrays with curvilinear geometries perform well in AWGN and fading channels.
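A minimal LMS beamformer for a small UCA, under idealized assumptions (known training symbols, isotropic elements, no mutual coupling), shows the weight update such studies build on:

    import numpy as np

    rng = np.random.default_rng(11)

    M, radius_wl = 8, 0.5                       # elements; radius in wavelengths
    phi = 2.0 * np.pi * np.arange(M) / M        # element azimuths

    def steering(az):
        return np.exp(1j * 2.0 * np.pi * radius_wl * np.cos(az - phi))

    a_sig, a_int = steering(np.radians(30.0)), steering(np.radians(120.0))
    w, mu = np.zeros(M, dtype=complex), 0.01

    for _ in range(5000):
        d = np.exp(1j * 2.0 * np.pi * rng.random())          # training symbol
        i = np.exp(1j * 2.0 * np.pi * rng.random())          # interferer symbol
        x = a_sig * d + a_int * i + 0.1 * (rng.normal(size=M) + 1j * rng.normal(size=M))
        e = d - np.conj(w) @ x                  # error against the reference
        w += mu * x * np.conj(e)                # LMS update

    print("gain toward signal:", abs(np.conj(w) @ a_sig))
    print("gain toward interferer:", abs(np.conj(w) @ a_int))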
Singh, Kunwar; Tiwari, Satish Chandra; Gupta, Maneesha
2014-01-01
The paper introduces novel architectures for implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, the traditional C2MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C2MOS based flip-flop designs mC2MOSff1 and mC2MOSff2 are realized using only sixteen transistors each, while the number of clocked transistors is also reduced in the case of mC2MOSff1. Postlayout simulations indicate that the mC2MOSff1 flip-flop shows 12.4% improvement in PDAP (power-delay-area product) when compared with the transmission gate flip-flop (TGFF) at 16X capacitive load, which is considered to be the best design alternative among the conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight-bit asynchronous counter is designed to layout level. LVS and parasitic extraction were carried out on Calibre, whereas layouts were implemented using IC Station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to reveal the scalability of both designs at modern process nodes.
Tiwari, Satish Chandra; Gupta, Maneesha
2014-01-01
The paper introduces novel architectures for implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, traditional C2MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C2MOS based flip-flop designs mC2MOSff1 and mC2MOSff2 are realized using only sixteen transistors each while the number of clocked transistors is also reduced in case of mC2MOSff1. Postlayout simulations indicate that mC2MOSff1 flip-flop shows 12.4% improvement in PDAP (power-delay-area product) when compared with transmission gate flip-flop (TGFF) at 16X capacitive load which is considered to be the best design alternative among the conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight bit asynchronous counter is designed to layout level. LVS and parasitic extraction were carried out on Calibre, whereas layouts were implemented using IC station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to reveal the scalability of both the designs at modern process nodes. PMID:24723808
SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.
Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S
2012-06-01
With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming if we want to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC's statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10^8 to 10^6, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
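The structure of the experiment (optimize on a noisy DDC, then recalculate with the noise-free one) fits in a short sketch; the dimensions, noise model, and plain projected-gradient solver are illustrative stand-ins for the paper's setup:

    import numpy as np

    rng = np.random.default_rng(12)

    n_vox, n_beamlets, rel_noise = 200, 50, 0.04
    D_true = rng.random((n_vox, n_beamlets))
    D_noisy = D_true * (1.0 + rel_noise * rng.normal(size=D_true.shape))
    d_presc = np.ones(n_vox)                     # prescribed dose

    def optimize(D, iters=2000):
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # safe gradient step
        x = np.zeros(n_beamlets)
        for _ in range(iters):                   # projected gradient descent
            x = np.maximum(x - step * (D.T @ (D @ x - d_presc)), 0.0)
        return x

    dose_actual = D_true @ optimize(D_noisy)     # plan optimized on noisy DDC
    dose_ideal = D_true @ optimize(D_true)       # plan optimized on clean DDC
    print("relative dose deviation from DDC noise: %.2f%%" %
          (100.0 * np.linalg.norm(dose_actual - dose_ideal) / np.linalg.norm(dose_ideal)))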
Kern, Christoph
2016-03-23
This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions), with regard to a change in SO2 column density or aerosol optical thickness, can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.
Designing new guides and instruments using McStas
NASA Astrophysics Data System (ADS)
Farhi, E.; Hansen, T.; Wildes, A.; Ghosh, R.; Lefmann, K.
With the increasing complexity of modern neutron-scattering instruments, the need for powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) has become essential. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle such effects, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron-scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics, and instruments [1]. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering, and time-of-flight spectrometers [2]. In this paper, we present some simulation results concerning different guide geometries that may be used in the future at the Institut Laue-Langevin. Gain factors ranging from two to five may be obtained for the integrated intensities, depending on the exact geometry, the guide coatings, and the source.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA, and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter list for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with the broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and then the optimal parameters were determined. The PDD profile and the proton range obtained with our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics, and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
Charge Structure and Counterion Distribution in Hexagonal DNA Liquid Crystal
Dai, Liang; Mu, Yuguang; Nordenskiöld, Lars; Lapp, Alain; van der Maarel, Johan R. C.
2007-01-01
A hexagonal liquid crystal of DNA fragments (double-stranded, 150 basepairs) with tetramethylammonium (TMA) counterions was investigated with small angle neutron scattering (SANS). We obtained the structure factors pertaining to the DNA and counterion density correlations with contrast matching in the water. Molecular dynamics (MD) computer simulation of a hexagonal assembly of nine DNA molecules showed that the inter-DNA distance fluctuates with a correlation time around 2 ns and a standard deviation of 8.5% of the interaxial spacing. The MD simulation also showed a minimal effect of the fluctuations in inter-DNA distance on the radial counterion density profile and significant penetration of the grooves by TMA. The radial density profile of the counterions was also obtained from a Monte Carlo (MC) computer simulation of a hexagonal array of charged rods with fixed interaxial spacing. Strong ordering of the counterions between the DNA molecules and the absence of charge fluctuations at longer wavelengths was shown by the SANS number and charge structure factors. The DNA-counterion and counterion structure factors are interpreted with the correlation functions derived from the Poisson-Boltzmann equation, MD, and MC simulation. Best agreement is observed between the experimental structure factors and the prediction based on the Poisson-Boltzmann equation and/or MC simulation. The SANS results show that TMA is too large to penetrate the grooves to a significant extent, in contrast to what is shown by MD simulation. PMID:17098791
Jover, J; Haslam, A J; Galindo, A; Jackson, G; Müller, E A
2012-10-14
We present a continuous pseudo-hard-sphere potential based on a cut-and-shifted Mie (generalized Lennard-Jones) potential with exponents (50, 49). Using this potential one can mimic the volumetric, structural, and dynamic properties of the discontinuous hard-sphere potential over the whole fluid range. The continuous pseudo-potential has the advantage that it may be incorporated directly into off-the-shelf molecular-dynamics code, allowing the user to capitalise on existing hardware and software advances. Simulation results for the compressibility factor of the fluid and solid phases of our pseudo hard spheres are presented and compared to both the Carnahan-Starling equation of state for the fluid and published data, the differences being indistinguishable within simulation uncertainty. The specific form of the potential is employed to simulate flexible chains formed from these pseudo hard spheres at contact (pearl-necklace model) for m_c = 4, 5, 7, 8, 16, 20, 100, 201, and 500 monomer segments. The compressibility factor of the chains per unit of monomer, m_c, approaches a limiting value at reasonably small chain lengths, m_c < 50, as predicted by Wertheim's first-order thermodynamic perturbation theory. Simulation results are also presented for highly asymmetric mixtures of pseudo hard spheres, with diameter ratios of 3:1, 5:1, and 20:1, over the whole composition range.
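The cut-and-shifted Mie (50, 49) potential itself is compact enough to sketch. The WCA-style choice of cutting at the potential minimum and shifting up by ε, so that the core is purely repulsive, is assumed here from the paper's description of a pseudo-hard-sphere potential.

import numpy as np

N_EXP, M_EXP = 50, 49
# Mie prefactor C = (n/(n-m)) * (n/m)**(m/(n-m)); for (50, 49) this is 50*(50/49)**49.
C_MIE = (N_EXP / (N_EXP - M_EXP)) * (N_EXP / M_EXP) ** (M_EXP / (N_EXP - M_EXP))
R_CUT = N_EXP / M_EXP  # cutoff at the potential minimum, in units of sigma

def u_phs(r: np.ndarray, eps: float = 1.0, sigma: float = 1.0) -> np.ndarray:
    """Cut-and-shifted Mie (50, 49) pseudo-hard-sphere potential.

    Cutting at the minimum and shifting up by eps leaves a steep, purely
    repulsive core that mimics hard spheres (WCA-style construction).
    """
    r = np.asarray(r, dtype=float)
    sr = sigma / r
    u = C_MIE * eps * (sr**N_EXP - sr**M_EXP) + eps
    return np.where(r < R_CUT * sigma, u, 0.0)

r = np.linspace(0.95, 1.1, 7)
print(u_phs(r))  # large near contact, exactly zero beyond r = (50/49) sigma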
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Lee, D.; Oreopoulos, L.; Barahona, D.; Nenes, A.; Suarez, M. J.
2012-01-01
A revised version of the Microphysics of clouds with Relaxed Arakawa-Schubert and Aerosol-Cloud interaction (McRAS-AC) scheme, including, among others, the Barahona and Nenes ice nucleation parameterization, is implemented in the GEOS-5 AGCM. Various fields from a 10-year integration of the AGCM with McRAS-AC were compared with their counterparts from an integration of the baseline GEOS-5 AGCM, and with satellite data as observations. Generally, using McRAS-AC reduced biases in the cloud fields, and cloud radiative effects were simulated much better over most regions of the Earth. Two weaknesses were identified in the McRAS-AC runs, namely, too few cloud particles around 40°S-60°S, and too high a cloud water path during northern-hemisphere summer over the Gulf Stream and North Pacific. Sensitivity analyses showed that these biases potentially originated from biases in the aerosol input. The first bias is largely eliminated in a sensitivity test using 50% smaller aerosol particles, while the second bias is much reduced when interactive aerosol chemistry is turned on. The main drawback of McRAS-AC is a dearth of low-level marine stratus clouds, probably due to the lack of dry convection, not yet implemented into the cloud scheme. Despite these biases, McRAS-AC simulates realistic clouds and optical properties that can improve further with better aerosol input. Because it predicts cloud particle number concentration and effective particle size for both convective and stratiform clouds, and thereby captures aerosol indirect effects, it has the potential to be a valuable tool for climate modeling research.
A medical image-based graphical platform -- features, applications and relevance for brachytherapy.
Fonseca, Gabriel P; Reniers, Brigitte; Landry, Guillaume; White, Shane; Bellezzo, Murillo; Antunes, Paula C G; de Sales, Camila P; Welteman, Eduardo; Yoriyaz, Hélio; Verhaegen, Frank
2014-01-01
Brachytherapy dose calculation is commonly performed using the Task Group No. 43 updated report (TG-43U1) formalism. Recently, a more accurate approach has been proposed that can handle tissue composition, tissue density, body shape, applicator geometry, and dose reporting either in media or in water. Some model-based dose calculation algorithms are based on Monte Carlo (MC) simulations. This work presents a software platform capable of processing medical images and treatment plans, and preparing the required input data for MC simulations. The A Medical Image-based Graphical platfOrm-Brachytherapy module (AMIGOBrachy) is a user interface, coupled to the MCNP6 MC code, for absorbed-dose calculations. The AMIGOBrachy was first validated in water for a high-dose-rate (192)Ir source. Next, dose distributions were validated in uniform phantoms consisting of different materials. Finally, dose distributions were obtained in patient geometries. Results were compared against a treatment planning system including a linear Boltzmann transport equation (LBTE) solver capable of handling nonwater heterogeneities. The TG-43U1 source parameters are in good agreement with the literature, with more than 90% of anisotropy values within 1%. No significant dependence on tissue composition was observed when comparing MC results against the LBTE solver. Clinical cases showed differences up to 25% when comparing MC results against TG-43U1. About 92% of the voxels exhibited dose differences lower than 2% when comparing MC results against the LBTE solver. The AMIGOBrachy can improve the accuracy of the TG-43U1 dose calculation by using a more accurate MC dose calculation algorithm, and it can be incorporated into clinical practice via a user-friendly graphical interface. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
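For orientation, here is a minimal sketch of the TG-43U1 point-source (1D) formalism that such MC platforms are benchmarked against. The tabulated radial dose function, anisotropy factor, and numeric parameters below are illustrative placeholders, not consensus data for any real source.

import numpy as np

# TG-43U1 point-source (1D) formalism:
#   Ddot(r) = S_K * Lambda * (r0/r)**2 * g_P(r) * phi_an(r),  r0 = 1 cm.
R0 = 1.0  # cm

r_tab   = np.array([0.5, 1.0, 2.0, 3.0, 5.0])    # cm
g_tab   = np.array([1.02, 1.00, 0.97, 0.93, 0.85])   # radial dose function (toy)
phi_tab = np.array([0.97, 0.97, 0.96, 0.96, 0.95])   # anisotropy factor (toy)

def dose_rate(r_cm: float, S_K: float = 40000.0, Lam: float = 1.109) -> float:
    """Dose rate in cGy/h for air-kerma strength S_K (U) and dose-rate
    constant Lambda (cGy h^-1 U^-1); both values are examples only."""
    g = np.interp(r_cm, r_tab, g_tab)
    phi = np.interp(r_cm, r_tab, phi_tab)
    return S_K * Lam * (R0 / r_cm) ** 2 * g * phi

print(dose_rate(2.0))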
Bhola, Ruchi; Bhalla, Swaran; Gupta, Radha; Singh, Ishwar; Kumar, Sunil
2014-05-01
Literature suggests that the glottic view is better with the McGrath(®) video laryngoscope and Truview(®) than with the Macintosh blade. The purpose of this study was to evaluate the effectiveness of the McGrath video laryngoscope in comparison with the Truview laryngoscope for tracheal intubation in patients with simulated cervical spine injury using manual in-line stabilisation. This prospective randomised study was undertaken in the operation theatre of a tertiary referral centre after approval from the Institutional Review Board. A total of 100 consenting patients presenting for elective surgery requiring tracheal intubation were randomly assigned to undergo intubation using the McGrath(®) video laryngoscope (n = 50) or the Truview(®) laryngoscope (n = 50). In all patients, we applied manual in-line stabilisation of the cervical spine throughout the airway management. Statistical testing was conducted with the Statistical Package for the Social Sciences (SPSS), version 17.0. Demographic data, airway assessment and haemodynamics were compared using the Chi-square test. A P < 0.05 was considered significant. The time to successful intubation was shorter with the McGrath video laryngoscope than with the Truview (30.02 s vs. 38.72 s). However, there was no significant difference between the laryngoscopic views obtained in the two groups. The number of second intubation attempts required and the incidence of complications were negligible with both devices. The success rate of intubation with both devices was 100%. Intubation with the McGrath video laryngoscope caused smaller alterations in haemodynamics. Both laryngoscopes are reliable in simulated cervical spine injury with manual in-line stabilisation, with a 100% success rate and a good glottic view.
Feaster, Toby D.; Westcott, Nancy E.; Hudson, Robert J.M.; Conrads, Paul; Bradley, Paul M.
2012-01-01
Rainfall is an important forcing function in most watershed models. As part of a previous investigation to assess interactions among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations in the Edisto River Basin, the topography-based hydrological model (TOPMODEL) was applied in the McTier Creek watershed in Aiken County, South Carolina. Measured rainfall data from six National Weather Service (NWS) Cooperative (COOP) stations surrounding the McTier Creek watershed were used to calibrate the McTier Creek TOPMODEL. Since the 1990s, the next generation weather radar (NEXRAD) has provided rainfall estimates at a finer spatial and temporal resolution than the NWS COOP network. For this investigation, NEXRAD-based rainfall data were generated at the NWS COOP stations and compared with measured rainfall data for the period June 13, 2007, to September 30, 2009. Likewise, these NEXRAD-based rainfall data were used with TOPMODEL to simulate streamflow in the McTier Creek watershed and then compared with the simulations made using measured rainfall data. NEXRAD-based rainfall data for non-zero rainfall days were lower than measured rainfall data at all six NWS COOP locations. The total number of concurrent days for which both measured and NEXRAD-based data were available at the COOP stations ranged from 501 to 833, the number of non-zero days ranged from 139 to 209, and the total difference in rainfall ranged from -1.3 to -21.6 inches. With the calibrated TOPMODEL, simulations using NEXRAD-based rainfall data and those using measured rainfall data produce similar results with respect to matching the timing and shape of the hydrographs. Comparison of the bias, which is the mean of the residuals between observed and simulated streamflow, however, reveals that simulations using NEXRAD-based rainfall tended to underpredict streamflow overall. Given that the total NEXRAD-based rainfall data for the simulation period is lower than the total measured rainfall at the NWS COOP locations, this bias would be expected. Therefore, to better assess the use of NEXRAD-based rainfall estimates as compared to NWS COOP rainfall data on the hydrologic simulations, TOPMODEL was recalibrated and updated simulations were made using the NEXRAD-based rainfall data. Comparisons of observed and simulated streamflow show that the TOPMODEL results using measured rainfall data and NEXRAD-based rainfall are comparable. Nonetheless, TOPMODEL simulations using NEXRAD-based rainfall still tended to underpredict total streamflow volume, although the magnitude of differences were similar to the simulations using measured rainfall. The McTier Creek watershed was subdivided into 12 subwatersheds and NEXRAD-based rainfall data were generated for each subwatershed. Simulations of streamflow were generated for each subwatershed using NEXRAD-based rainfall and compared with subwatershed simulations using measured rainfall data, which unlike the NEXRAD-based rainfall were the same data for all subwatersheds (derived from a weighted average of the six NWS COOP stations surrounding the basin). For the two simulations, subwatershed streamflow were summed and compared to streamflow simulations at two U.S. Geological Survey streamgages. The percentage differences at the gage near Monetta, South Carolina, were the same for simulations using measured rainfall data and NEXRAD-based rainfall. 
At the gage near New Holland, South Carolina, the percentage differences using the NEXRAD-based rainfall were twice those using the measured rainfall. Single-mass-curve comparisons showed an increase in the total volume of rainfall from north to south. Similar comparisons of the measured rainfall at the NWS COOP stations showed similar percentage differences, but the NEXRAD-based rainfall variations occurred over a much smaller distance than the measured rainfall. Nonetheless, it was concluded that in some cases, using NEXRAD-based rainfall data in TOPMODEL streamflow simulations may provide an effective alternative to using measured rainfall data. For this investigation, however, TOPMODEL streamflow simulations using NEXRAD-based rainfall data for both calibration and simulation did not show significant improvements in matching observed streamflow over simulations generated using measured rainfall data.
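A minimal sketch of the streamflow comparison metrics: the bias as defined above (the mean of the residuals between observed and simulated streamflow), plus the Nash-Sutcliffe efficiency as one common hydrograph goodness-of-fit measure (the NSE is our addition, not a metric named in the abstract). The daily flows are synthetic.

import numpy as np

def bias(obs: np.ndarray, sim: np.ndarray) -> float:
    """Mean of the residuals (obs - sim); positive => underprediction."""
    return float(np.mean(obs - sim))

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency; 1.0 is a perfect hydrograph match."""
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

rng = np.random.default_rng(0)
q_obs = 10.0 + 5.0 * rng.random(365)
q_sim = q_obs * 0.95 + rng.normal(0.0, 0.5, 365)  # systematically low, as with NEXRAD input
print(bias(q_obs, q_sim), nse(q_obs, q_sim))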
Microcystin distribution in physical size class separations of natural plankton communities
Graham, J.L.; Jones, J.R.
2007-01-01
Phytoplankton communities in 30 northern Missouri and Iowa lakes were physically separated into 5 size classes (>100 µm, 53-100 µm, 35-53 µm, 10-35 µm, 1-10 µm) during 15-21 August 2004 to determine the distribution of microcystin (MC) in size-fractionated lake samples and assess how net collections influence estimates of MC concentration. MC was detected in whole water (total) from 83% of lakes sampled, and total MC values ranged from 0.1-7.0 µg/L (mean = 0.8 µg/L). On average, MC in the >100 µm size class comprised ≈40% of total MC, while other individual size classes contributed 9-20% to total MC. MC values decreased with size class and were significantly greater in the >100 µm size class (mean = 0.5 µg/L) than the 35-53 µm (mean = 0.1 µg/L), 10-35 µm (mean = 0.0 µg/L), and 1-10 µm (mean = 0.0 µg/L) size classes (p < 0.01). MC values in nets with 100-µm, 53-µm, 35-µm, and 10-µm mesh were cumulatively summed to simulate the potential bias of measuring MC with various size plankton nets. On average, a 100-µm net underestimated total MC by 51%, compared to 37% for a 53-µm net, 28% for a 35-µm net, and 17% for a 10-µm net. While plankton nets consistently underestimated total MC, concentration of algae with net sieves allowed detection of MC at low levels (≈0.01 µg/L); 93% of lakes had detectable levels of MC in concentrated samples. Thus, small-mesh plankton nets are an option for documenting MC occurrence, but whole-water samples should be collected to characterize total MC concentrations. © Copyright by the North American Lake Management Society 2007.
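The net-bias simulation is a cumulative sum over size classes; a sketch with made-up per-class concentrations (not data from the study) illustrates the calculation.

import numpy as np

# Example MC concentrations (ug/L) by size class for one lake; the values
# are illustrative, not data from the study.
size_class = [">100 um", "53-100 um", "35-53 um", "10-35 um", "1-10 um"]
mc_conc    = np.array([0.32, 0.16, 0.10, 0.12, 0.10])
total_mc   = mc_conc.sum()

# A net of a given mesh retains only the classes at least as large as the
# mesh, so cumulative sums from the largest class simulate each net.
for n_classes, mesh in zip(range(1, 5), ["100-um", "53-um", "35-um", "10-um"]):
    captured = mc_conc[:n_classes].sum()
    under = (1.0 - captured / total_mc) * 100.0
    print(f"{mesh} net underestimates total MC by {under:.0f}%")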
Radio Frequency Scanning and Simulation of Oriented Strand Board Material Property
NASA Astrophysics Data System (ADS)
Liu, Xiaojian; Zhang, Jilei; Steele, Philip. H.; Donohoe, J. Patrick
2008-02-01
Oriented strand board (OSB) is a wood composite product with the largest market share in U.S. residential and commercial construction. Wood specific gravity (SG) and moisture content (MC) play an important role in the OSB manufacturing process. They are two of the critical variables that manufacturers are required to monitor, locate, and control in order to produce a product with consistent quality. In this study, radio-frequency scanning nondestructive evaluation (NDE) technologies were used to evaluate the local-area MC and SG of OSB panels following panel production by hot pressing. A finite-element software simulation tool was used to optimize the sensor geometry and to investigate the interaction between the electromagnetic field and the wood dielectric properties. Our results indicate the RF scanning response is closely correlated with the MC and SG variations in OSB panels. Radio-frequency NDE appears to have potential as an effective method for ensuring OSB panel quality during manufacturing.
Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test
NASA Astrophysics Data System (ADS)
Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.
We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon-counting statistics. Such a noise process is simulated by a first-order autoregressive, AR(1), process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high-energy transients.
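A sketch of one way to generate the surrogate noise described above: an AR(1) red-noise process modulating a Poisson counter. The exponential link between the AR(1) state and the count rate is our assumption; the paper does not specify the coupling.

import numpy as np

def ar1_poisson(n: int, phi: float, mean_rate: float, seed: int = 0) -> np.ndarray:
    """Surrogate light curve: AR(1) red noise modulating a Poisson counter.

    phi is the lag-1 autocorrelation of the underlying AR(1) process and
    mean_rate the mean photon count per bin; both are free parameters.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    eps = rng.normal(0.0, 1.0, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + np.sqrt(1.0 - phi**2) * eps[t]
    rate = mean_rate * np.exp(0.3 * x)   # positive, stochastically varying rate
    return rng.poisson(rate)             # intrinsic photon-counting noise

counts = ar1_poisson(4096, phi=0.9, mean_rate=50.0)
print(counts.mean(), counts.var())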
NASA Astrophysics Data System (ADS)
He, An; Gong, Jiaming; Shikazono, Naoki
2018-05-01
In the present study, a model is introduced to correlate the electrochemical performance of a solid oxide fuel cell (SOFC) with the 3D microstructure reconstructed by focused ion beam scanning electron microscopy (FIB-SEM), in which the solid surface is modeled by the marching cubes (MC) method. The lattice Boltzmann method (LBM) is used to solve the governing equations. In order to maintain the geometries reconstructed by the MC method, local effective diffusivities and conductivities computed from the MC geometries are applied in each grid cell, and a partial bounce-back scheme is applied according to the boundary predicted by the MC method. From the tortuosity factor and overpotential calculation results, it is concluded that the MC geometry drastically improves the computational accuracy by providing more precise topology information.
Electrons to Reactors Multiscale Modeling: Catalytic CO Oxidation over RuO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutton, Jonathan E.; Lorenzi, Juan M.; Krogel, Jaron T.
2018-04-20
First-principles kinetic Monte Carlo (1p-kMC) simulations for CO oxidation on two RuO2 facets, RuO2(110) and RuO2(111), were coupled to the computational fluid dynamics (CFD) simulation package MFIX, and reactor-scale simulations were then performed. 1p-kMC coupled with CFD has recently been shown to be a feasible method for translating molecular-scale mechanistic knowledge to the reactor scale, enabling comparisons to in situ and online experimental measurements. Only a few studies with such coupling have been published. This work incorporates multiple catalytic surface facets into the scale-coupled simulation, and three possibilities were investigated: the two possibilities of each facet individually being the dominant phase in the reactor, and the possibility that both facets were present on the catalyst particles in the ratio predicted by an ab initio thermodynamics-based Wulff construction. When lateral interactions between adsorbates were included in the 1p-kMC simulations, the two surfaces, RuO2(110) and RuO2(111), were found to be of a similar order of magnitude in activity for the pressure range of 1 × 10^-4 bar to 1 bar, with the RuO2(110) surface termination showing more simulated activity than the RuO2(111) surface termination. Coupling between the 1p-kMC and CFD was achieved with a lookup table generated by the error-based modified Shepard interpolation scheme. Isothermal reactor-scale simulations were performed and compared to two separate experimental studies, conducted with reactant partial pressures of ≤0.1 bar. Simulations without an isothermality restriction were also conducted and showed that the simulated temperature gradient across the catalytic reactor bed is <0.5 K, which validated the use of the isothermality restriction for investigating the reactor-scale phenomenological temperature dependences. The Wulff-construction-based reactor simulations reproduced the trend of one experimental data set relatively well, with the (110) surface being more active at higher temperatures; in contrast, for the other experimental data set, our reactor simulations achieve surprisingly and perhaps fortuitously good agreement with the activity and phenomenological pressure dependence when it is assumed that the (111) facet is the only active facet present. Lastly, the active phase of catalytic CO oxidation over RuO2 remains unsettled, but the present study presents proof of principle (and progress) toward more accurate multiscale modeling from electrons to reactors and new simulation results.
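The coupling relies on a lookup table queried by interpolation. Below is a basic Shepard (inverse-distance-weighted) interpolation sketch; the error-based modified Shepard scheme actually used adds local gradient corrections omitted here, and the state points and rates are invented.

import numpy as np

def shepard(x_query: np.ndarray, x_tab: np.ndarray, f_tab: np.ndarray,
            power: float = 2.0) -> float:
    """Basic Shepard (inverse-distance-weighted) interpolation.

    x_tab: (n_points, n_dims) table of kMC state points (e.g. T, p_CO);
    f_tab: tabulated turnover frequencies at those state points.
    """
    d = np.linalg.norm(x_tab - x_query, axis=1)
    if np.any(d < 1e-12):                 # query hits a table node exactly
        return float(f_tab[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * f_tab) / np.sum(w))

x_tab = np.array([[500.0, 0.01], [550.0, 0.01], [500.0, 0.10], [550.0, 0.10]])
f_tab = np.array([1.2, 3.5, 0.8, 2.9])    # illustrative rates, s^-1 per site
print(shepard(np.array([525.0, 0.05]), x_tab, f_tab))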
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
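The reported scaling can be captured by fitting the inverse power model t = a·n^(-b) in log-log space. Only the 53-minute and 3.11-minute (20-node) runtimes come from the abstract; the intermediate points are invented for the demo.

import numpy as np

# Runtime vs. cluster size, fit to t = a * n**(-b) via linear least squares
# in log-log space.
nodes   = np.array([1, 2, 5, 10, 20])
runtime = np.array([53.0, 27.5, 11.8, 6.1, 3.11])   # minutes

b, log_a = np.polyfit(np.log(nodes), np.log(runtime), 1)
a = np.exp(log_a)
print(f"t(n) ~ {a:.1f} * n**({b:.2f}) minutes")      # b near -1 => near-ideal scaling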
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Du, X; Su, L
2014-06-15
Purpose: To compare the CT doses derived from experiments and GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned by a GE LightSpeed Pro 16 MDCT. For the cadaver study, thimble chambers (Models 10×5−0.6CT and 10×6−0.6CT) were used to measure the absorbed dose in different deep and superficial organs. Whole-body scans were first performed to construct a complete image database for MC simulations. Abdomen/pelvis helical scans were then conducted using 120/100 kVp, 300 mAs, and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed using 120 kVp and x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the results by ARCHER-CT were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas and urinary bladder were compared. The difference between experiments and simulations was within 19% for the 120 kVp scan and 25% for the 100 kVp scan. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys and thymus were compared. The difference was 39.2% for the esophagus, and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to the systematic errors of the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm used to generate the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. Supported by R01 grant (R01EB015478) from the National Institute of Biomedical Imaging and Bioengineering.
Modelling the structural response of cotton plants to mepiquat chloride and population density
Gu, Shenghao; Evers, Jochem B.; Zhang, Lizhen; Mao, Lili; Zhang, Siping; Zhao, Xinhua; Liu, Shaodong; van der Werf, Wopke; Li, Zhaohu
2014-01-01
Background and Aims Cotton (Gossypium hirsutum) has indeterminate growth. The growth regulator mepiquat chloride (MC) is used worldwide to restrict vegetative growth and promote boll formation and yield. The effects of MC are modulated by complex interactions with growing conditions (nutrients, weather) and plant population density, and as a result the effects on plant form are not fully understood and are difficult to predict. The use of MC is thus hard to optimize. Methods To explore crop responses to plant density and MC, a functional–structural plant model (FSPM) for cotton (named CottonXL) was designed. The model was calibrated using 1 year's field data, and validated by using two additional years of detailed experimental data on the effects of MC and plant density in stands of pure cotton and in intercrops of cotton with wheat. CottonXL simulates development of leaf and fruits (square, flower and boll), plant height and branching. Crop development is driven by thermal time, population density, MC application, and topping of the main stem and branches. Key Results Validation of the model showed good correspondence between simulated and observed values for leaf area index with an overall root-mean-square error of 0.50 m² m⁻², and with an overall prediction error of less than 10% for number of bolls, plant height, number of fruit branches and number of phytomers. Canopy structure became more compact with the decrease of leaf area index and internode length due to the application of MC. Moreover, MC did not have a substantial effect on boll density but increased lint yield at higher densities. Conclusions The model satisfactorily represents the effects of agronomic measures on cotton plant structure. It can be used to identify optimal agronomic management of cotton to achieve optimal plant structure for maximum yield under varying environmental conditions. PMID:24489020
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-21
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms, with accuracy comparable to state-of-the-art CPU-based MC codes.
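A minimal 1D sketch of the gamma-index analysis used for such comparisons; clinical tools operate on 3D dose grids with finer search resolution, and the global normalization to the reference maximum is our assumption.

import numpy as np

def gamma_1d(dose_ref: np.ndarray, dose_eval: np.ndarray, x_mm: np.ndarray,
             dd_pct: float = 1.0, dta_mm: float = 1.0) -> np.ndarray:
    """Simplified 1D global gamma index (dose difference normalized to the
    reference maximum)."""
    dd_abs = dd_pct / 100.0 * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, dri) in enumerate(zip(x_mm, dose_ref)):
        # search all evaluated points for the best space/dose compromise
        term = ((x_mm - xi) / dta_mm) ** 2 + ((dose_eval - dri) / dd_abs) ** 2
        gam[i] = np.sqrt(term.min())
    return gam

x = np.arange(0.0, 100.0, 0.5)                    # mm
ref = np.exp(-((x - 50.0) ** 2) / 200.0)          # toy profile (TOPAS stand-in)
ev  = np.exp(-((x - 50.3) ** 2) / 200.0) * 0.995  # toy profile (gPMC stand-in)
gam = gamma_1d(ref, ev, x)
print(f"passing rate: {100.0 * np.mean(gam <= 1.0):.1f}%")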
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Wonjung; Kovacic, Gregor; Cai, David
Using the (1+1)D Majda-McLaughlin-Tabak model as an example, we present an extension of the wave turbulence (WT) theory to systems with strong nonlinearities. We demonstrate that nonlinear wave interactions renormalize the dynamics, leading to (i) a possible destruction of scaling structures in the bare wave systems and a drastic deformation of the resonant manifold even at weak nonlinearities, and (ii) creation of nonlinear resonance quartets in wave systems for which there would be no resonances as predicted by the linear dispersion relation. Finally, we derive an effective WT kinetic equation and show that our prediction of the renormalized Rayleigh-Jeans distribution is in excellent agreement with the simulation of the full wave system in equilibrium.
Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Stierstorfer, Karl; Kappler, Steffen
2016-12-01
An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well). As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross talk and lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled by using these two simplifying assumptions: energy conservation and mutual exclusiveness. The mutual exclusiveness is that no more than two pixels measure energy from one photon. The effect of model parameters has been studied and results were compared with MC simulations. The agreement with respect to the spectrum was evaluated using the reduced chi-square statistic, χ²_red, a weighted sum of squared errors with χ²_red ≥ 1, where χ²_red = 1 indicates a perfect fit. The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ²_red values were 1.049 with the 90 keV data and 1.007 with the 140 kVp data. The degrees of cross talk (in terms of the relative increase from single pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for MC simulations, while they were 21% and 17%, respectively, for the model. The covariance was in strong agreement qualitatively, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and the resulting spatio-energetic correlation between PCD pixels. The MC simulation validated its accuracy.
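A toy generator for spatio-energetically correlated PCD counts under the two stated simplifying assumptions (energy conservation and mutual exclusiveness); the sharing probability, uniform split fraction, and one-dimensional pixel layout are our inventions, not the calibrated detector model.

import numpy as np

def pcd_counts(n_photons: int, e_kev: float = 90.0, p_share: float = 0.2,
               n_pixels: int = 32, seed: int = 0):
    """Toy generator of spatio-energetically correlated PCD deposits.

    Energy conservation: split energies sum to the photon energy.
    Mutual exclusiveness: at most two pixels share one photon.
    """
    rng = np.random.default_rng(seed)
    deposits = [[] for _ in range(n_pixels)]       # energies recorded per pixel
    pix = rng.integers(0, n_pixels, n_photons)     # primary interaction pixel
    for p in pix:
        if rng.random() < p_share and p + 1 < n_pixels:
            f = rng.random()                       # fraction kept by pixel p
            deposits[p].append(f * e_kev)          # the two parts sum to e_kev
            deposits[p + 1].append((1.0 - f) * e_kev)
        else:
            deposits[p].append(e_kev)
    return deposits

dep = pcd_counts(100000)
counts = np.array([len(d) for d in dep])
print(counts.sum())  # exceeds n_photons because shared photons are double-counted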
A Collection of Nonlinear Aircraft Simulations in MATLAB
NASA Technical Reports Server (NTRS)
Garza, Frederico R.; Morelli, Eugene A.
2003-01-01
Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.
NASA Astrophysics Data System (ADS)
Dal Molin, J. P.; Caliri, A.
2018-01-01
Here we focus on the conformational search for the native structure when it is ruled by the hydrophobic effect and steric specificities coming from amino acids. Our main tool of investigation is a 3D lattice model provided with a ten-letter alphabet, the stereochemical model. This minimalist model was conceived for Monte Carlo (MC) simulations with the kinetic behavior of protein-like chains in solution in mind. We have three central goals here. The first one is to characterize the folding time (τ) by two distinct sampling methods, so we present two sets of 10³ MC simulations for a fast protein-like sequence. The resulting sets of characteristic folding times, τ and τ_q, were obtained by the application of the standard Metropolis algorithm (MA), as well as by an enhanced algorithm (MqA). The finding for τ_q shows two things: (i) the chain-solvent hydrophobic interactions {h_k} plus a set of inter-residue steric constraints {c_i,j} are able to emulate the conformational search for the native structure; for each one of the 10³ MC simulations performed, the target is always found within a finite time window; (ii) the ratio τ_q/τ ≅ 1/10 suggests that the effect of local thermal fluctuations, encompassed by the Tsallis weight, provides the chain with an innate efficiency to escape from energetic and steric traps. We performed additional MC simulations with variations of our design rule to attest to this first result; both algorithms, the MA and the MqA, were applied to a restricted set of targets, and a physical insight is provided. Our second finding was obtained from a set of 600 independent MC simulations, performed only with the MqA applied to an extended set of 200 representative targets, our native structures. The results show how structural patterns modulate τ_q, which covers four orders of magnitude; this finding is our second goal. The third, and last, result was obtained with a special kind of simulation performed with the purpose of exploring a possible connection between the hydrophobic component of protein stability and the native structural topology. We simulated those same 200 targets again with the MqA only. However, this time we evaluated the relative frequency {φ_q} with which each target visits its corresponding native structure along an appropriate simulation time. Due to the presence of the hydrophobic effect in our approach, we obtained a strong correlation between the stability and the folding rate (R = 0.85): the faster a sequence finds its target, the larger the hydrophobic component of its stability. The strong correlation fulfills our last goal. This final finding suggests that the hydrophobic effect could not be a general stabilizing factor for proteins.
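The contrast between the two samplers comes down to the acceptance weight. Below is a sketch under the assumption that the MqA accepts uphill moves with a generalized Tsallis weight (the abstract does not spell out the algorithmic details).

import numpy as np

def accept_metropolis(dE: float, T: float, rng) -> bool:
    """Standard Metropolis acceptance with the Boltzmann weight."""
    return dE <= 0.0 or rng.random() < np.exp(-dE / T)

def accept_tsallis(dE: float, T: float, q: float, rng) -> bool:
    """Generalized (Tsallis-weight) acceptance; recovers Metropolis as q -> 1.

    Weight W_q = [1 - (1 - q) dE / T]**(1/(1 - q)) where the bracket is
    positive, else 0. This generic form is our assumption.
    """
    if dE <= 0.0:
        return True
    base = 1.0 - (1.0 - q) * dE / T
    w = base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0
    return rng.random() < w

rng = np.random.default_rng(1)
dE, T, n = 2.0, 1.0, 100000
print(sum(accept_metropolis(dE, T, rng) for _ in range(n)) / n,
      sum(accept_tsallis(dE, T, 1.5, rng) for _ in range(n)) / n)

For q = 1.5 an uphill move of ΔE = 2T is accepted with probability 0.25 versus exp(-2) ≈ 0.135 for the Boltzmann weight, consistent with the easier escape from energetic and steric traps described above.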
Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch
2016-11-14
Self-consistent field (SCF) theory serves as an efficient tool for studying the mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined by combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory, which uses partial enumeration to describe discrete WLC. In the MC simulations the Helmholtz free energy is calculated as a function of the strength of the MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of TI for LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscapes in MC and SCF are directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate for the missing correlations. Increasing σ reduces the correlations, and SCF reproduces well the free energy in the MC simulations.
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow with the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a generalized extreme value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
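A minimal sketch of the non-stationary super ensemble: each year of the design lifespan draws its annual extreme from its own GEV, and lifetime maxima are accumulated over MC realizations. The linear drift in the location parameter and all numbers are illustrative; note that scipy's shape convention is c = -ξ.

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
n_years, n_realizations = 50, 10000
loc = 100.0 + 0.5 * np.arange(n_years)   # location drifts upward year by year
scale, shape = 20.0, -0.1                # scipy's c = -xi (sign convention)

# One annual maximum per year per realization, each year from its own GEV.
annual_max = genextreme.rvs(shape, loc=loc, scale=scale,
                            size=(n_realizations, n_years), random_state=rng)
lifetime_max = annual_max.max(axis=1)

# Compare the non-stationary lifetime 99th percentile with a stationary
# view based on the first-year distribution alone.
print(np.percentile(lifetime_max, 99),
      genextreme.ppf(0.99, shape, loc=loc[0], scale=scale))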
Surface tension of undercooled liquid cobalt
NASA Astrophysics Data System (ADS)
Yao, W. J.; Han, X. J.; Chen, M.; Wei, B.; Guo, Z. Y.
2002-08-01
This paper provides the results of experimentally measured and numerically predicted surface tensions of undercooled liquid cobalt. The experiments were performed using the oscillating drop technique combined with electromagnetic levitation. The simulations are carried out with the Monte Carlo (MC) method, where the surface tension is predicted through calculations of the work of cohesion, and the interatomic interaction is described with an embedded-atom method. The maximum undercooling of the liquid cobalt reached 231 K (0.13 T_m) in the experiment and 268 K (0.17 T_m) in the simulation. The surface tension and its relationship with temperature obtained in the experiment and simulation are σ_exp = 1.93 − 0.00033(T − T_m) N m⁻¹ and σ_cal = 2.26 − 0.00032(T − T_m) N m⁻¹, respectively. The temperature dependence of the surface tension calculated from the MC simulation is in reasonable agreement with that measured in the experiment.
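The quoted linear law can be recovered from measurements with an ordinary least-squares fit. The sketch below synthesizes data around the reported σ_exp relation (the scatter and sample temperatures are invented) and fits σ(T) = σ_m + k(T − T_m).

import numpy as np

T_m = 1768.0                                  # melting point of Co, K
T = np.array([1500.0, 1600.0, 1700.0, 1768.0, 1850.0])
sigma = 1.93 - 0.00033 * (T - T_m)            # synthesize data on the quoted line
sigma += np.random.default_rng(2).normal(0.0, 0.002, T.size)  # measurement scatter

# Linear least-squares fit of sigma(T) = sigma_m + k * (T - T_m).
k, sigma_m = np.polyfit(T - T_m, sigma, 1)
print(f"sigma(T) = {sigma_m:.3f} + {k:.6f} (T - T_m) N/m")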
A mass reconstruction technique for a heavy resonance decaying to τ+τ-
NASA Astrophysics Data System (ADS)
Xia, Li-Gang
2016-11-01
For a resonance decaying to τ+τ-, it is difficult to reconstruct its mass accurately because of the presence of neutrinos in the decay products of the τ leptons. If the resonance is heavy enough, we show that its mass can be well determined by the momentum component of the τ decay products perpendicular to the velocity of the τ lepton, p⊥, and the mass of the visible/invisible decay products, m_vis/m_inv, for τ decaying to hadrons/leptons. By sampling all kinematically allowed values of p⊥ and m_vis/m_inv according to their joint probability distributions determined by MC simulations, the mass of the mother resonance is assumed to lie at the position with the maximal probability. Since p⊥ and m_vis/m_inv are invariant under a boost in the τ lepton direction, the joint probability distributions are independent of the τ's origin. Thus this technique is able to determine the mass of an unknown resonance with no efficiency loss. It is tested using MC simulations of the physics processes pp → Z/h(125)/h(750) + X → ττ + X at 13 TeV. The ratio of the full width at half maximum to the peak value of the reconstructed mass distribution is found to be 20%-40% using the information of the missing transverse energy. Supported by a General Financial Grant from the China Postdoctoral Science Foundation (2015M581062)
Turner, Andrew D; Waack, Julia; Lewis, Adam; Edwards, Christine; Lawton, Linda
2018-02-01
A simple, rapid UHPLC-MS/MS method has been developed and optimised for the quantitation of microcystins and nodularin in a wide variety of sample matrices. The microcystin analogues targeted were MC-LR, MC-RR, MC-LA, MC-LY, MC-LF, MC-LW, MC-YR, MC-WR, [Asp3] MC-LR, [Dha7] MC-LR, MC-HilR and MC-HtyR. Optimisation studies were conducted to develop a simple, quick and efficient extraction protocol without the need for complex pre-analysis concentration procedures, together with a rapid, sub-5 min chromatographic separation of toxins in shellfish and algal supplement tablet powders, as well as water and cyanobacterial bloom samples. Validation studies were undertaken on each matrix-analyte combination for the full method performance characteristics, following international guidelines. The method was found to be specific and linear over the full calibration range. Method sensitivity, in terms of limits of detection, quantitation and reporting, was found to be significantly improved in comparison to LC-UV methods and applicable to the analysis of each of the four matrices. Overall, acceptable recoveries were determined for each of the matrices studied, with associated precision and within-laboratory reproducibility well within expected guidance limits. Results from the formalised ruggedness analysis of all available cyanotoxins showed that the method was robust for all parameters investigated. The results presented here show that the optimised LC-MS/MS method for cyanotoxins is fit for the purpose of detection and quantitation of a range of microcystins and nodularin in shellfish, algal supplement tablet powder, water and cyanobacteria. The method provides a valuable early-warning tool for the rapid, routine extraction and analysis of natural waters, cyanobacterial blooms, algal powders, food supplements and shellfish tissues, enabling monitoring labs to supplement traditional microscopy techniques and report toxicity results within a short timeframe of sample receipt. The new method, now accredited to the ISO 17025 standard, is simple, quick, applicable to multiple matrices and highly suitable for use as a routine, high-throughput, fast-turnaround regulatory monitoring tool. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.
2003-08-01
The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on the interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m⁻² at overhead sun for the standard tropical atmosphere, regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with the full 3D benchmark values. The fairly limited scope of these tests and the inability of any one code to perform extremely well for all cases suggest that a paradigm shift is due for modeling 1D solar fluxes for cloudy atmospheres.
Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
NASA Astrophysics Data System (ADS)
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed the early phases of binary and multiple star formation at high spatial resolution, and these observations should be compared with theoretical models of comparably high spatial resolution. We present two theoretical models for (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto the protobinary. The simulations exhibit asymmetry of the circumbinary disk. Such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but most computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and the speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and it can be used as a final product or for verifying surrogate and analytical collision probability methods.
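A minimal single-epoch MC collision-probability sketch: sample both objects' positions from Gaussian uncertainties and count the samples whose separation falls below the combined collision radius. A real tool, as described above, propagates the samples through the dynamics over the whole encounter window; all numbers here are invented.

import numpy as np

def collision_probability(mu_a, cov_a, mu_b, cov_b, radius_km: float,
                          n_samples: int = 1_000_000, seed: int = 0) -> float:
    """MC collision probability at a fixed epoch from Gaussian position
    uncertainties of two objects."""
    rng = np.random.default_rng(seed)
    ra = rng.multivariate_normal(mu_a, cov_a, n_samples)
    rb = rng.multivariate_normal(mu_b, cov_b, n_samples)
    miss = np.linalg.norm(ra - rb, axis=1)
    return float(np.mean(miss < radius_km))  # hit if separation < combined radius

mu_a = np.zeros(3)
mu_b = np.array([0.5, 0.0, 0.0])             # km
cov = np.diag([0.2, 0.2, 0.2]) ** 2          # km^2
print(collision_probability(mu_a, cov, mu_b, cov, radius_km=0.05))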
NASA Astrophysics Data System (ADS)
Dünser, Simon; Meyer, Daniel W.
2016-06-01
In most groundwater aquifers, the dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field. This effect is referred to as macrodispersion. Since in practice the complete conductivity field is virtually never available beyond a few point measurements, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations from a given geostatistical model for the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational costs of MC, the polar Markovian velocity process (PMVP) model was recently introduced, delivering predictions at about three orders of magnitude smaller computing times. In artificial test cases, the PMVP model has provided good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup. The setup considered is derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary with a large number of unevenly scattered conductivity measurements. Validations were done against reference MC simulations and good overall agreement was found. Moreover, simulations of a simplified setup with a single measurement were conducted in order to reassess the model's most fundamental assumptions and to provide guidance for model improvements.
Gravity affects the responsiveness of Runx2 to 1, 25-dihydroxyvitamin D3 (VD3)
NASA Astrophysics Data System (ADS)
Guo, Feima; Dai, Zhongquan; Wu, Feng; Liu, Zhaoxia; Tan, Yingjun; Wan, Yumin; Shang, Peng; Li, Yinghui
2013-03-01
Bone loss resulting from spaceflight is mainly caused by decreased bone formation and decreased osteoblast proliferation and differentiation. The transcription factor Runx2 plays an important role in osteoblast differentiation and function by responding to microenvironment changes, including cytokines and mechanical factors. The effect of 1,25-dihydroxyvitamin D3 (VD3) on Runx2 under different mechanical conditions is far less clear. This study describes how gravity affects the response of Runx2 to VD3. An MC3T3-6OSE2-Luc osteoblast model was constructed in which the activity of Runx2 is reflected by reporter luciferase activity, and the model was verified with bone-related cytokines. The results showed that luciferase activity in MC3T3-6OSE2-Luc cells transfected with Runx2 was twice that of the empty vector. Alkaline phosphatase (ALP) activity was increased in MC3T3-6OSE2-Luc cells by different concentrations of IGF-I and BMP2. MC3T3-6OSE2-Luc cells were cultured under simulated microgravity or centrifugation, with or without VD3. In simulated microgravity, luciferase activity was decreased after 48 h of clinorotation culture, but increased in the centrifuge culture. Luciferase activity was increased after VD3 treatment in both normal conditions and simulated microgravity; when simultaneously treated with VD3, the increase in luciferase activity in simulated microgravity was lower than that in the 1 g condition and higher than that in the centrifuge condition. Co-immunoprecipitation showed that the interaction between the VD3 receptor (VDR) and Runx2 was decreased by simulated microgravity, but increased by centrifugation. From these results, we conclude that gravity affects the response of Runx2 to VD3, which results from an alteration in the interaction between VDR and Runx2 under different gravity conditions.
Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharjee, P. S.; Sud, Yogesh C.; Liu, Xiaohong
2010-02-22
A common deficiency of many cloud-physics parameterizations, including NASA's microphysics of clouds with aerosol-cloud interactions (hereafter called McRAS-AC), is that they simulate smaller than observed ice cloud particle numbers and larger than observed sizes. A single column model (SCM) of McRAS-AC and Global Circulation Model (GCM) physics, together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols, was used to systematically examine the influence of ammonium sulfate ((NH4)2SO4) aerosols, which are not included in the present formulations of McRAS-AC. Specifically, the influence of (NH4)2SO4 aerosols on the optical properties of both liquid and ice clouds was analyzed. First, an (NH4)2SO4 parameterization was included in the APM to assess its effect vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted with the SCM over the ARM-SGP site and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that inclusion of (NH4)2SO4 in the SCM made a remarkable improvement in the simulated effective radius of ice clouds. However, the corresponding ice-cloud optical thickness increased more than is observed. This can be caused by lack of cloud advection and evaporation. We argue that this deficiency can be mitigated by adjusting the other tunable parameters of McRAS-AC, such as precipitation efficiency. Inclusion of ice cloud particle splintering, introduced through well-established empirical equations, is found to further improve the results. Preliminary tests show that these changes make a substantial improvement in simulating the cloud optical properties in the GCM, particularly by simulating a far more realistic cloud distribution over the ITCZ.
Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid
2017-12-07
Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to Monte Carlo (MC) simulation data over the reduced temperature range 0.6 ≤ T* ≤ 6.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy from the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of hard spheres are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from these MC simulation approaches agree well with the EOS results, which shows that the proposed model can be reliably utilized in the framework of thermodynamic theories.
NASA Astrophysics Data System (ADS)
Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid
2017-12-01
Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to Monte Carlo (MC) simulation data over the reduced temperature range 0.6 ≤ T* ≤ 6.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy from the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of hard spheres are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from these MC simulation approaches agree well with the EOS results, which shows that the proposed model can be reliably utilized in the framework of thermodynamic theories.
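The regression step described above can be sketched in a few lines. The paper's actual basis functions are not given in the abstract, so the polynomial terms in η and 1/T*, the synthetic data, and the deviation metric below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "MC data": compressibility factor Z sampled on the (T*, eta) ranges
# quoted above. The true EOS functional form is not given in the abstract.
T = rng.uniform(0.6, 6.0, 400)
eta = rng.uniform(0.1, 0.72, 400)
Z = 1 + 4 * eta + 10 * eta**2 / T + 0.05 * rng.standard_normal(400)  # fake data

# Design matrix with a handful of basis terms; lstsq finds the coefficients.
X = np.column_stack([np.ones_like(eta), eta, eta**2, eta / T, eta**2 / T, eta**3 / T**2])
coef, *_ = np.linalg.lstsq(X, Z, rcond=None)

resid = X @ coef - Z
aard = np.mean(np.abs(resid / Z)) * 100  # average absolute relative % deviation
print(f"coefficients: {np.round(coef, 3)}, AARD = {aard:.2f}%")
```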
Simulating Silicon Photomultiplier Response to Scintillation Light
Jha, Abhinav K.; van Dam, Herman T.; Kupinski, Matthew A.; Clarkson, Eric
2015-01-01
The response of a Silicon Photomultiplier (SiPM) to optical signals is affected by many factors including photon-detection efficiency, recovery time, gain, optical crosstalk, afterpulsing, dark count, and detector dead time. Many of these parameters vary with overvoltage and temperature. When used to detect scintillation light, there is a complicated non-linear relationship between the incident light and the response of the SiPM. In this paper, we propose a combined discrete-time discrete-event Monte Carlo (MC) model to simulate SiPM response to scintillation light pulses. Our MC model accounts for all relevant aspects of the SiPM response, some of which were not accounted for in the previous models. We also derive and validate analytic expressions for the single-photoelectron response of the SiPM and the voltage drop across the quenching resistance in the SiPM microcell. These analytic expressions consider the effect of all the circuit elements in the SiPM and accurately simulate the time-variation in overvoltage across the microcells of the SiPM. Consequently, our MC model is able to incorporate the variation of the different SiPM parameters with varying overvoltage. The MC model is compared with measurements on SiPM-based scintillation detectors and with some cases for which the response is known a priori. The model is also used to study the variation in SiPM behavior with SiPM-circuit parameter variations and to predict the response of a SiPM-based detector to various scintillators. PMID:26236040
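A heavily simplified, hedged sketch of the discrete MC idea follows: it captures only three of the effects named above (photon-detection efficiency, optical crosstalk, dark counts) plus microcell saturation, and deliberately omits recovery time, afterpulsing, and the overvoltage dynamics that are central to the paper's full model. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sipm_response(n_photons, pde=0.4, p_crosstalk=0.1, dark_rate=1e6,
                  gate_ns=200.0, n_cells=3600):
    """Toy discrete MC of the fired-microcell count for one light pulse."""
    detected = rng.binomial(n_photons, pde)          # photon-detection efficiency
    dark = rng.poisson(dark_rate * gate_ns * 1e-9)   # dark counts in the gate
    primary = detected + dark
    crosstalk = rng.binomial(primary, p_crosstalk)   # one extra cell per CT event
    fired = primary + crosstalk
    # Saturation: cells are finite, so the response is non-linear at high light.
    return n_cells * (1.0 - np.exp(-fired / n_cells))

for n in (10, 100, 1000, 10000):
    print(n, f"{sipm_response(n):.1f}")
```

Running the loop shows the non-linear light-to-response relationship the abstract refers to: the output grows nearly linearly at low light and compresses as the fired-cell count approaches the cell count.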
Sneessens, I; Veysset, P; Benoit, M; Lamadon, A; Brunschwig, G
2016-11-01
Crop-livestock production is claimed to be more sustainable than specialized production systems. However, the existence of controversial studies suggests that certain conditions must hold for mixing crop and livestock production to deliver higher sustainability performance. Whereas previous studies focused on the impact of crop-livestock interactions on performance, we posit here that crop-livestock organization is a key determinant of farming system sustainability. Crop-livestock organization refers to the percentage of the agricultural area that is dedicated to each production. Our objective is to investigate whether crop-livestock organization has both a direct and an indirect impact on mixed crop-livestock (MC-L) sustainability. To that end, we built a whole-farm model parametrized on representative French sheep and crop farming systems in plain areas (Vienne, France). This model permits simulating contrasted MC-L systems and their resulting sustainability through the following performance indicators: farm income, production, N balance, greenhouse gas (GHG) emissions (/kg product) and MJ consumption (/kg product). Two MC-L systems were simulated with contrasted crop-livestock organizations (MC20-L80: 20% crops; MC80-L20: 80% crops). A first scenario - constraining no crop-livestock interactions in both MC-L systems - highlights that crop-livestock organization has a significant direct impact on performance, implying trade-offs between sustainability objectives. Indeed, the MC80-L20 system shows higher performance for farm income (+44%), livestock production (+18%) and crop GHG emissions (-14%), whereas the MC20-L80 system has a better N balance (-53%) and a lower livestock MJ consumption (-9%). A second scenario - allowing for crop-livestock interactions in both MC20-L80 and MC80-L20 systems - showed that crop-livestock organization has a significant indirect impact on performance. Indeed, even if crop-livestock interactions improve performance, crop-livestock organization influences the capacity of MC-L systems to benefit from crop-livestock interactions. As a consequence, we observed a decreasing performance trade-off between MC-L systems for farm income (-4%) and crop GHG emissions (-10%), whereas the gap increases for nitrogen balance (+23%), livestock production (+6%), MJ consumption (+16%), GHG emissions (+5%) and crop MJ consumption (+5%). However, the indirect impact of crop-livestock organization does not reverse the trend of trade-offs between sustainability objectives determined by the direct impact of crop-livestock organization. In conclusion, crop-livestock organization is a key factor that has to be taken into account when studying the sustainability of mixed crop-livestock systems.
Fiorina, E; Ferrero, V; Pennazio, F; Baroni, G; Battistoni, G; Belcari, N; Cerello, P; Camarlinghi, N; Ciocca, M; Del Guerra, A; Donetti, M; Ferrari, A; Giordanengo, S; Giraudo, G; Mairani, A; Morrocchi, M; Peroni, C; Rivetti, A; Da Rocha Rolo, M D; Rossi, S; Rosso, V; Sala, P; Sportelli, G; Tampellini, S; Valvo, F; Wheadon, R; Bisogni, M G
2018-05-07
Hadrontherapy is a method for treating cancer with very targeted dose distributions and enhanced radiobiological effects. To fully exploit these advantages, in vivo range monitoring systems are required. These devices measure, preferably during the treatment, the secondary radiation generated by the beam-tissue interactions. However, since correlation of the secondary radiation distribution with the dose is not straightforward, Monte Carlo (MC) simulations are very important for treatment quality assessment. The INSIDE project constructed an in-beam PET scanner to detect signals generated by the positron-emitting isotopes resulting from projectile-target fragmentation. In addition, a FLUKA-based simulation tool was developed to predict the corresponding reference PET images using a detailed scanner model. The INSIDE in-beam PET was used to monitor two consecutive proton treatment sessions on a patient at the Italian Center for Oncological Hadrontherapy (CNAO). The reconstructed PET images were updated every 10 s providing a near real-time quality assessment. By half-way through the treatment, the statistics of the measured PET images were already significant enough to be compared with the simulations with average differences in the activity range less than 2.5 mm along the beam direction. Without taking into account any preferential direction, differences within 1 mm were found. In this paper, the INSIDE MC simulation tool is described and the results of the first in vivo agreement evaluation are reported. These results have justified a clinical trial, in which the MC simulation tool will be used on a daily basis to study the compliance tolerances between the measured and simulated PET images. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés
The Monte Carlo simulation of gamma spectroscopy systems is common practice nowadays. The most popular software packages for this purpose are the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it had only been demonstrated experimentally for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to do this is by simulation of the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation the matrix effects (the self-attenuation effect) are not considered; therefore these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The obtained results show total agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method. The relative bias is less than 1% in all cases.
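The traditional FEP-area method mentioned above reduces to counting full-energy-peak events and dividing by the number of emitted photons. The sketch below shows that bookkeeping on a synthetic spectrum; the bin ranges, count levels, and the omission of continuum subtraction (acceptable for a clean simulated peak, not for measurements) are all assumptions.

```python
import numpy as np

def fep_absolute_efficiency(spectrum, peak_lo, peak_hi, n_emitted):
    """Traditional FEP-area absolute efficiency from a simulated spectrum.

    `spectrum` is counts per energy bin; the net peak area is taken without
    continuum subtraction, adequate only for a clean simulated peak.
    """
    fep_counts = spectrum[peak_lo:peak_hi].sum()
    return fep_counts / n_emitted

# Toy spectrum: 1e6 emitted photons, ~3% landing in the 661.65 keV peak bins.
rng = np.random.default_rng(4)
spec = rng.poisson(5.0, size=700)            # flat continuum
spec[655:668] += rng.poisson(2300, size=13)  # FEP around 661.65 keV
print(f"efficiency = {fep_absolute_efficiency(spec, 655, 668, 1_000_000):.4f}")
```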
Intermediate-sized natural gas fueled carbonate fuel cell power plants
NASA Astrophysics Data System (ADS)
Sudhoff, Frederick A.; Fleming, Donald K.
1994-04-01
This executive summary describes the accomplishments of the joint US Department of Energy (DOE) Morgantown Energy Technology Center (METC) and M-C POWER Corporation Cooperative Research and Development Agreement (CRADA) No. 93-013. This study addresses the intermediate power plant size between 2 megawatts (MW) and 200 MW. A 25 MW natural-gas-fueled carbonate fuel cell power plant was chosen for this purpose. In keeping with recent designs, the fuel cell will operate under approximately three atmospheres of pressure. An expander/alternator is used to expand the exhaust gas to atmospheric conditions and generate additional power. A steam-bottoming cycle is not included in this study because it is not believed to be cost effective at this system size. This study also compares the simplicity and accuracy of a spreadsheet-based simulation with that of a full Advanced System for Process Engineering (ASPEN) simulation. The simple spreadsheet model can be run entirely on a personal computer, can be made available to all users, and is particularly advantageous to the small business user.
Astronauts Grissom and Young in Gemini Mission Simulator
1964-05-22
S64-25295 (March 1964) --- Astronauts Virgil I. (Gus) Grissom (right) and John W. Young, prime crew for the first manned Gemini mission (GT-3), are shown inside a Gemini mission simulator at McDonnell Aircraft Corp., St. Louis, MO. The simulator will provide Gemini astronauts and ground crews with realistic mission simulation during intensive training prior to actual launch.
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation was performed of flows in monoatomic gases, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculation is mean profiles of density and temperature.
Simulation of temperature distribution in tumor Photothermal treatment
NASA Astrophysics Data System (ADS)
Zhang, Xiyang; Qiu, Shaoping; Wu, Shulian; Li, Zhifang; Li, Hui
2018-02-01
Light transmission in biological tissue and the optical properties of biological tissue are important research topics in biomedical photonics, of great theoretical and practical significance for medical diagnosis and light-based therapy of disease. In this paper, a temperature feedback controller is presented for monitoring photothermal treatment in real time. Two-dimensional Monte Carlo (MC) simulation and the diffusion approximation were compared and analyzed. The results demonstrated that the diffusion approximation with extrapolated boundary conditions, solved by the finite element method, is a good approximation to MC simulation. Then, in order to minimize thermal damage, real-time temperature monitoring with a proportional-integral-derivative (PID) controller was appraised during the photothermal treatment process.
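The temperature feedback idea is the textbook PID loop; a minimal sketch follows, regulating a first-order thermal plant toward a treatment setpoint. The gains, plant constants, and the 43 °C setpoint are illustrative assumptions, not values from the paper.

```python
# Minimal discrete PID loop regulating a first-order thermal plant, sketching
# the temperature feedback-controller idea described above.
def simulate_pid(setpoint=43.0, kp=2.0, ki=0.5, kd=0.1, dt=0.1, steps=600):
    T, integral, prev_err = 20.0, 0.0, setpoint - 20.0
    for _ in range(steps):
        err = setpoint - T
        integral += err * dt
        derivative = (err - prev_err) / dt
        # Laser power command, clamped to be non-negative.
        power = max(0.0, kp * err + ki * integral + kd * derivative)
        # First-order plant: heating proportional to power, cooling to ambient.
        T += dt * (0.8 * power - 0.2 * (T - 20.0))
        prev_err = err
    return T

print(f"final temperature: {simulate_pid():.2f} C")
```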
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS is used for the MC simulation; therefore, patient-specific CT DICOM files were converted to PHITS input. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file was used for MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.
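A common preprocessing step when turning CT DICOM data into MC-code input is a piecewise-linear calibration from Hounsfield units to mass density. The sketch below shows that mapping; the breakpoints are generic illustrative values, not the calibration used with PHITS in this study.

```python
import numpy as np

# Illustrative HU-to-mass-density calibration curve (air -> water -> bone).
hu_points = np.array([-1000.0, 0.0, 1000.0, 3000.0])
rho_points = np.array([0.001, 1.0, 1.6, 2.8])  # g/cm^3

def hu_to_density(hu):
    """Piecewise-linear interpolation of mass density from CT numbers."""
    return np.interp(hu, hu_points, rho_points)

print(hu_to_density(np.array([-1000, -500, 0, 40, 1200])))
```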
Transient in-plane thermal transport in nanofilms with internal heating
Cao, Bing-Yang
2016-01-01
Wide applications of nanofilms in electronics necessitate an in-depth understanding of nanoscale thermal transport, which significantly deviates from Fourier's law. Great efforts have focused on the effective thermal conductivity under temperature difference, while it is still ambiguous whether the diffusion equation with an effective thermal conductivity can accurately characterize the nanoscale thermal transport with internal heating. In this work, transient in-plane thermal transport in nanofilms with internal heating is studied via Monte Carlo (MC) simulations in comparison to the heat diffusion model and mechanism analyses using Fourier transform. Phonon-boundary scattering leads to larger temperature rise and slower thermal response rate when compared with the heat diffusion model based on Fourier's law. The MC simulations are also compared with the diffusion model with effective thermal conductivity. In the first case of continuous internal heating, the diffusion model with effective thermal conductivity under-predicts the temperature rise by the MC simulations at the initial heating stage, while the deviation between them gradually decreases and vanishes with time. By contrast, for the one-pulse internal heating case, the diffusion model with effective thermal conductivity under-predicts both the peak temperature rise and the cooling rate, so the deviation can always exist. PMID:27118903
Self-Consistent Monte Carlo Study of the Coulomb Interaction under Nano-Scale Device Structures
NASA Astrophysics Data System (ADS)
Sano, Nobuyuki
2011-03-01
It has been pointed out that the Coulomb interaction between electrons is expected to be of crucial importance for predicting reliable device characteristics. In particular, device performance is greatly degraded by plasmon excitation, represented by dynamical potential fluctuations induced in the highly doped source and drain regions by the channel electrons. We employ self-consistent 3D Monte Carlo (MC) simulations, which reproduce both the correct mobility under various electron concentrations and the collective plasma waves, to study the physical impact of dynamical potential fluctuations on device performance in double-gate MOSFETs. The average force experienced by an electron due to the Coulomb interaction inside the device is evaluated by comparing the self-consistent MC simulations with fixed-potential MC simulations that exclude the Coulomb interaction. Also, the band-tailing associated with local potential fluctuations in the highly doped source region is quantitatively evaluated, and it is found that the band-tailing becomes strongly dependent on position in real space even inside the uniform source region. This work was partially supported by Grants-in-Aid for Scientific Research B (No. 2160160) from the Ministry of Education, Culture, Sports, Science and Technology in Japan.
Transient in-plane thermal transport in nanofilms with internal heating.
Hua, Yu-Chao; Cao, Bing-Yang
2016-02-01
Wide applications of nanofilms in electronics necessitate an in-depth understanding of nanoscale thermal transport, which significantly deviates from Fourier's law. Great efforts have focused on the effective thermal conductivity under temperature difference, while it is still ambiguous whether the diffusion equation with an effective thermal conductivity can accurately characterize the nanoscale thermal transport with internal heating. In this work, transient in-plane thermal transport in nanofilms with internal heating is studied via Monte Carlo (MC) simulations in comparison to the heat diffusion model and mechanism analyses using Fourier transform. Phonon-boundary scattering leads to larger temperature rise and slower thermal response rate when compared with the heat diffusion model based on Fourier's law. The MC simulations are also compared with the diffusion model with effective thermal conductivity. In the first case of continuous internal heating, the diffusion model with effective thermal conductivity under-predicts the temperature rise by the MC simulations at the initial heating stage, while the deviation between them gradually decreases and vanishes with time. By contrast, for the one-pulse internal heating case, the diffusion model with effective thermal conductivity under-predicts both the peak temperature rise and the cooling rate, so the deviation can always exist.
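The Fourier-law baseline that the phonon MC results above are compared against is just the heat diffusion equation with a uniform internal source. A minimal explicit finite-difference sketch of that baseline follows; the material numbers are placeholders, and the effective (size-reduced) conductivity discussed in the paper would simply replace `k`.

```python
import numpy as np

nx, L = 101, 100e-9                 # grid points, film thickness (m)
dx = L / (nx - 1)
k, rho_c, q = 100.0, 1.6e6, 1e15    # W/(m K), J/(m^3 K), W/m^3 internal heating
alpha = k / rho_c
dt = 0.4 * dx**2 / alpha            # stable explicit time step

T = np.zeros(nx)                    # temperature rise above ambient
for _ in range(20_000):
    lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
    T[1:-1] += dt * (alpha * lap + q / rho_c)
    T[0] = T[-1] = 0.0              # isothermal boundaries

print(f"peak temperature rise: {T.max():.4f} K")
```

Per the abstract, the phonon MC predicts a larger temperature rise and slower response than this diffusion baseline during the initial heating stage, because phonon-boundary scattering is absent from the diffusion picture.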
Simulation of substrate degradation in composting of sewage sludge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Jun; Gao Ding, E-mail: gaod@igsnrr.ac.c; Chen Tongbin
2010-10-15
To simulate the substrate degradation kinetics of the composting process, this paper develops a mathematical model with a first-order reaction assumption and heat/mass balance equations. A pilot-scale composting test with a mixture of sewage sludge and wheat straw was conducted in an insulated reactor. The BVS (biodegradable volatile solids) degradation process, matrix mass, MC (moisture content), DM (dry matter) and VS (volatile solids) were simulated numerically from the model and experimental data. The numerical simulation offered a method for simulating k (the first-order rate constant) and estimating k20 (the first-order rate constant at 20 °C). Compared with experimental values, the relative error of the simulated mass of the compost at maturity was 0.22%, of MC 2.9%, of DM 4.9% and of VS 5.2%, which means the simulation fits well. The k of sewage sludge was simulated, and k20 and k20s (the first-order rate constant of the slowly degrading fraction of BVS at 20 °C) of the sewage sludge were estimated as 0.082 and 0.015 d⁻¹, respectively.
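The model structure is first-order decay, dBVS/dt = -k(T)·BVS, with a temperature-corrected rate constant. A minimal sketch follows using the reported k20 = 0.082 d⁻¹; the temperature correction shown, k(T) = k20·(1.066^(T-20) - 1.21^(T-60)), is a common composting form and an assumption here, since the paper's exact temperature function is not given in the abstract, and the temperature curve is invented.

```python
import numpy as np

def k_of_T(T, k20=0.082):
    # Assumed composting-style temperature correction (d^-1); not the
    # paper's confirmed functional form.
    return k20 * (1.066 ** (T - 20.0) - 1.21 ** (T - 60.0))

dt, days = 0.1, 30.0
bvs = 100.0                       # kg biodegradable volatile solids
for step in range(int(days / dt)):
    # Toy reactor temperature: ramps to ~60 C around day 5, then cools.
    T = 20.0 + 40.0 * np.exp(-((step * dt - 5.0) ** 2) / 20.0)
    bvs -= k_of_T(T) * bvs * dt   # first-order degradation

print(f"BVS remaining after {days:.0f} d: {bvs:.1f} kg")
```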
NASA Astrophysics Data System (ADS)
Owen, Cameron J.; Boles, Georgia C.; Chernyy, Valeriy; Bakker, Joost M.; Armentrout, P. B.
2018-01-01
A previous infrared multiple photon dissociation (IRMPD) action spectroscopy and density functional theory (DFT) study explored the structures of the [M,C,2H]+ products formed by dehydrogenation of methane by four gas-phase 5d transition metal cations (M+ = Ta+, W+, Ir+, and Pt+). Complicating the analysis of these spectra for Ir and Pt was the observation of an extra band in both spectra, not readily identified as a fundamental vibration. In an attempt to validate the assignment of these additional peaks, the present work examines the gas phase [M,C,2D]+ products of the same four metal ions formed by reaction with perdeuterated methane (CD4). As before, metal cations are formed in a laser ablation source and react with methane pulsed into a reaction channel downstream, and the resulting products are spectroscopically characterized through photofragmentation using the free-electron laser for intracavity experiments in the 350-1800 cm⁻¹ range. Photofragmentation was monitored by the loss of D for [Ta,C,2D]+ and [W,C,2D]+ and of D2 in the case of [Pt,C,2D]+ and [Ir,C,2D]+. Comparison of the experimental spectra and DFT calculated spectra leads to structural assignments for all [M,C,2H/2D]+ systems that are consistent with previous identifications and allows a full description of the systematic spectroscopic shifts observed for deuterium labeling of these complexes, some of the smallest systems to be studied using IRMPD action spectroscopy. Further, full rotational contours are simulated for each vibrational band and explain several observations in the present spectra, such as doublet structures in several bands as well as the observed linewidths. The prominent extra bands in the [Pt,C,2D/2H]+ spectra appear to be most consistent with an overtone of the out-of-plane bending vibration of the metal carbene cation structure.
Astronaut William S. McArthur in training for contingency EVA in WETF
1993-09-10
S93-43840 (6 Sept 1993) --- Astronaut William S. McArthur, mission specialist, participates in training for contingency Extravehicular Activity (EVA) for the STS-58 mission. For simulation purposes, McArthur was about to be submerged to a point of neutral buoyancy in the Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F). Though the Spacelab Life Sciences (SLS-2) mission does not include a planned EVA, all crews designate members to learn proper procedures to perform outside the spacecraft in the event of failure of remote means to accomplish those tasks.
New features in McStas, version 1.5
NASA Astrophysics Data System (ADS)
Åstrand, P.-O.; Lefmann, K.; Farhi, E.; Nielsen, K.; Skårup, P.
The neutron ray-tracing simulation package McStas has attracted numerous users, and the development of the package continues with version 1.5, released at the ICNS 2001 conference. New features include: support for neutron polarisation, labelling of neutrons, realistic source and sample components, and an interface to the Risø instrument-control software TASCOM. We give a general introduction to McStas and present the latest developments. In particular, we give an example of how the neutron-label option has been used to locate the origin of a spurious side-peak observed in an experiment with RITA-1 at Risø.
Astronaut William McArthur prepares for a training exercise
1993-07-20
S93-38679 (20 July 1993) --- Wearing a training version of the partial pressure launch and entry garment, astronaut William S. McArthur listens to a briefing on emergency egress procedures for the STS-58 mission. McArthur, along with five other NASA astronauts and a visiting payload specialist assigned to the seven member crew, later rehearsed contingency evacuation procedures. Most of the training session took place in the crew compartment and full fuselage trainers of the Space Shuttle mockup and integration laboratory.
Business Simulations in Financial Management Courses: Implications for Higher Education
ERIC Educational Resources Information Center
Wolmarans, H. P.
2006-01-01
Business simulations provide a teaching method that typically yields (1) more hands-on experience, (2) a higher level of excitement, (3) a higher noise level (and yet a lower incidence of problems), and (4) more commitment than traditional methods of teaching (McLure 1997, 3). Business simulations are experiential learning opportunities that have…
Acevedo, Orlando; Jorgensen, William L
2010-01-19
Application of combined quantum and molecular mechanical (QM/MM) methods focuses on predicting activation barriers and the structures of stationary points for organic and enzymatic reactions. Characterization of the factors that stabilize transition structures in solution and in enzyme active sites provides a basis for design and optimization of catalysts. Continued technological advances allowed for expansion from prototypical cases to mechanistic studies featuring detailed enzyme and condensed-phase environments with full integration of the QM calculations and configurational sampling. This required improved algorithms featuring fast QM methods, advances in computing changes in free energies including free-energy perturbation (FEP) calculations, and enhanced configurational sampling. In particular, the present Account highlights development of the PDDG/PM3 semi-empirical QM method, computation of multi-dimensional potentials of mean force (PMF), incorporation of on-the-fly QM in Monte Carlo (MC) simulations, and a polynomial quadrature method for efficient modeling of proton-transfer reactions. The utility of this QM/MM/MC/FEP methodology is illustrated for a variety of organic reactions including substitution, decarboxylation, elimination, and pericyclic reactions. A comparison to experimental kinetic results on medium effects has verified the accuracy of the QM/MM approach in the full range of solvents from hydrocarbons to water to ionic liquids. Corresponding results from ab initio and density functional theory (DFT) methods with continuum-based treatments of solvation reveal deficiencies, particularly for protic solvents. Also summarized in this Account are three specific QM/MM applications to biomolecular systems: (1) a recent study that clarified the mechanism for the reaction of 2-pyrone derivatives catalyzed by macrophomate synthase as a tandem Michael-aldol sequence rather than a Diels-Alder reaction, (2) elucidation of the mechanism of action of fatty acid amide hydrolase (FAAH), an unusual Ser-Ser-Lys proteolytic enzyme, and (3) the construction of enzymes for Kemp elimination of 5-nitrobenzisoxazole that highlights the utility of QM/MM in the design of artificial enzymes.
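The free-energy perturbation calculations highlighted above rest on the Zwanzig estimator, ΔF = -kT ln⟨exp(-(E1-E0)/kT)⟩₀, averaged over configurations sampled in the reference state. A minimal numerical sketch follows; the Gaussian energy-gap samples stand in for the per-configuration QM/MM energy differences and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Zwanzig free-energy perturbation estimator over MC-sampled configurations.
kT = 0.596  # kcal/mol at ~300 K
dE = rng.normal(loc=1.0, scale=0.5, size=100_000)  # E1 - E0 per configuration
dF = -kT * np.log(np.mean(np.exp(-dE / kT)))
print(f"dF = {dF:.3f} kcal/mol")
```

For Gaussian energy gaps this should land near the analytic value μ - σ²/(2kT), a convenient sanity check on the exponential averaging.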
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hueso-Gonzalez, F; Vijande, J; Ballester, F
Purpose: Tissue heterogeneities and calcifications have a significant impact on the dosimetry of low energy brachytherapy (BT). RayStretch is an analytical algorithm developed in our institution to incorporate heterogeneity corrections in LDR prostate brachytherapy. The aim of this work is to study its application in clinical cases by comparing its predictions with the results obtained with TG-43 and Monte Carlo (MC) simulations. Methods: A clinical implant (71 I-125 seeds, 15 needles) from a real patient was considered. On this patient, different volumes with calcifications were considered. Doses were evaluated in three ways: i) with the treatment planning system (TPS) (TG-43), ii) with a MC study using the Penelope2009 code, and iii) with RayStretch. To analyze the performance of RayStretch, calcifications located in the prostate lobules covering 11% of the total prostate volume, and larger calcifications located in the lobules and underneath the urethra occupying a total of 30% of the volume, were considered. Three mass densities (1.05, 1.20, and 1.35 g/cm³) were explored for the calcifications. Therefore, 6 different scenarios ranging from small low-density calcifications to large high-density ones have been discussed. Results: DVH and D90 results given by RayStretch agree within 1% with the full MC simulations. Although no effort has been made to improve RayStretch's numerical performance, its present implementation is able to evaluate a clinical implant in a few seconds to the same level of accuracy as a detailed MC calculation. Conclusion: RayStretch is a robust method for heterogeneity corrections in prostate BT supported on TG-43 data. Its compatibility with commercial TPSs and its high calculation speed make it feasible for use in clinical settings for improving treatment quality. It will allow, in a second phase of this project, its use during intraoperative ultrasound planning. This study was partly supported by a fellowship grant from the Spanish Ministry of Education, by the Generalitat Valenciana under Project PROMETEOII/2013/010, by the Spanish Government under Project No. FIS2013-42156, and by the European Commission within the Seventh Framework Program through ENTERVISION (grant agreement number 264552).
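The D90 metric used for the comparison above is the minimum dose received by the hottest 90% of the target volume; with equal-volume voxels it reduces to the 10th percentile of the voxel doses, computable identically from a TG-43, RayStretch, or MC dose grid. A minimal sketch with invented voxel doses:

```python
import numpy as np

def d90(dose_voxels):
    """D90: dose covering 90% of the volume (equal-volume voxels assumed)."""
    return np.percentile(dose_voxels, 10.0)

rng = np.random.default_rng(6)
doses = rng.normal(160.0, 20.0, size=50_000)  # toy prostate voxel doses (Gy)
print(f"D90 = {d90(doses):.1f} Gy")
```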
SU-F-T-610: Comparison of Output Factors for Small Radiation Fields Used in SBRT Treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R; Eldib, A; Li, J
2016-06-15
Purpose: In order to fundamentally understand our previous dose verification results between measurements and treatment planning system (TPS) calculations for SBRT plans with different sized targets, the goal of the present work was to compare output factors for small fields measured using EDR2 films with TPS and Monte Carlo (MC) simulations. Methods: A 6 MV beam was delivered to EDR2 films for each of the following field sizes: 1×1 cm², 1.5×1.5 cm², 2×2 cm², 3×3 cm², 4×4 cm², 5×5 cm² and 10×10 cm². The films were developed in a film processor, then scanned with a Vidar VXR-16 scanner and analyzed using RIT113 version 6.1. A standard calibration curve was obtained with the 6 MV beam and was used to get absolute dose for the measured field sizes. Similar plans for all field sizes mentioned above were generated using Eclipse with the Analytical Anisotropic Algorithm. Similarly, MC simulations were carried out using MCSIM, an in-house MC code, for the different field sizes. Output factors normalized to the 10×10 cm² reference field were calculated for the different field sizes in all three cases and compared. Results: For field sizes ranging from 1×1 cm² to 2×2 cm², the differences in output factors between measurements (films), TPS and MC simulations were within 0.22%. For field sizes ranging from 3×3 cm² to 5×5 cm², differences in output factors were within 0.10%. Conclusion: No clinically significant difference was found in output factors for different field sizes obtained from films, TPS and MC simulations. Our results showed that the output factors are predicted accurately by the TPS when compared to actual measurements and the superior Monte Carlo dose calculation method. This study helps us understand our previously obtained dose verification results for small fields used in SBRT treatment.
NASA Astrophysics Data System (ADS)
Aziz Hashikin, Nurul Ab; Yeong, Chai-Hong; Guatelli, Susanna; Jeet Abdullah, Basri Johan; Ng, Kwan-Hoong; Malaroda, Alessandra; Rosenfeld, Anatoly; Perkins, Alan Christopher
2017-09-01
We aimed to investigate the validity of the partition model (PM) in estimating the absorbed doses to liver tumour (D_T), normal liver tissue (D_NL) and lungs (D_L) when cross-fire irradiation between these compartments is considered. A MIRD-5 phantom incorporating various treatment parameters, i.e. tumour involvement (TI), tumour-to-normal liver uptake ratio (T/N) and lung shunting (LS), was simulated using the Geant4 Monte Carlo (MC) toolkit. 10^8 track histories were generated for each combination of the three parameters to obtain the absorbed dose per activity uptake in each compartment (D_T/A_T, D_NL/A_NL, and D_L/A_L). The administered activities, A, were estimated using the PM so as to achieve the limiting dose to either normal liver, D_NL^lim, or lungs, D_L^lim (70 or 30 Gy, respectively). Using these administered activities, the activity uptake in each compartment (A_T, A_NL, and A_L) was estimated and multiplied by the absorbed dose per activity uptake obtained from the MC simulations to give the actual dose received by each compartment. The PM overestimated D_L by 11.7% in all cases, due to particles escaping from the lungs. D_T and D_NL by MC were largely affected by T/N, which is not considered by the PM because cross-fire is excluded at the tumour-normal liver boundary. This resulted in the PM overestimating D_T by up to 8% and underestimating D_NL by as much as -78%. When D_NL^lim was estimated via the PM, the MC simulations showed significantly higher D_NL for cases with higher T/N and LS ≤ 10%. All D_L and D_T by MC were overestimated by the PM, thus D_L^lim was never exceeded. The PM leads to inaccurate dose estimations due to the exclusion of cross-fire irradiation, i.e. between the tumour and normal liver tissue. Caution should be taken for cases with higher TI and T/N, and lower LS, as they contribute to major underestimation of D_NL. For D_L, a different correction factor for dose calculation may be used for improved accuracy.
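The partition-model bookkeeping being tested above splits the administered activity A between lungs, tumour, and normal liver via the lung shunt fraction LS and the uptake ratio T/N, then applies D = c·A_comp/m_comp per compartment. A minimal sketch follows; the dose constant c ≈ 49.7 Gy·kg/GBq is the commonly quoted 90Y value and, like the masses and activity below, should be treated as an illustrative assumption.

```python
C_Y90 = 49.7  # Gy.kg/GBq, commonly quoted 90Y dose constant (assumption)

def partition_doses(A, LS, TN, m_T, m_NL, m_L):
    """Partition-model doses (Gy) to tumour, normal liver, and lungs."""
    A_lung = A * LS
    A_liver = A - A_lung
    # Split hepatic activity by the uptake-per-unit-mass ratio T/N.
    A_T = A_liver * (TN * m_T) / (TN * m_T + m_NL)
    A_NL = A_liver - A_T
    return (C_Y90 * A_T / m_T, C_Y90 * A_NL / m_NL, C_Y90 * A_lung / m_L)

D_T, D_NL, D_L = partition_doses(A=2.0, LS=0.05, TN=3.0, m_T=0.2, m_NL=1.6, m_L=1.0)
print(f"D_T={D_T:.0f} Gy, D_NL={D_NL:.0f} Gy, D_L={D_L:.1f} Gy")
```

Note that nothing in this arithmetic sees the tumour-normal liver boundary, which is exactly the cross-fire omission the MC study above quantifies.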
Singh, Anamika; Dirain, Marvin; Witek, Rachel; Rocca, James R.; Edison, Arthur S; Haskell-Luevano, Carrie
2013-01-01
The melanocortin-3 (MC3) and melanocortin-4 (MC4) receptors regulate energy homeostasis, food intake, and associated physiological conditions. The MC4R has been studied extensively; less is known about the specific physiological roles of the MC3R. A major obstacle to closing this gap is the limited number of identified MC3R-selective ligands. We previously reported a spatial scanning approach of a 10-membered thioether-heterocycle ring incorporated into a chimeric peptide template that identified a lead nM MC4R ligand. Based upon those results, 17 compounds were designed and synthesized, focused upon modification in the pharmacophore domain. Notable results include the identification of ligand 11, a potent (0.13 nM), 5800-fold mMC3R-selective antagonist/slight partial agonist that is a 760 nM full agonist at the mMC4R. Biophysical experiments (2D ¹H NMR and computer-assisted molecular modeling) on this ligand identified an inverse γ-turn secondary structure in the ligand pharmacophore domain. PMID:23432160
NASA Astrophysics Data System (ADS)
Tarasov, A. P.; Egorov, A. I.; Rogatkin, D. A.
2017-07-01
Using multidetector computed tomography, the thicknesses of the bone squama and soft tissues of the human head were assessed. MC simulation revealed that the source-detector separation distances of three oximeters were inappropriate, which can cause extracerebral contamination of the signal.
Mixing of Isotactic and Syndiotactic Polypropylenes in the Melt
DOE Office of Scientific and Technical Information (OSTI.GOV)
CLANCY,THOMAS C.; PUTZ,MATHIAS; WEINHOLD,JEFFREY D.
2000-07-14
The miscibility of polypropylene (PP) melts in which the chains differ only in stereochemical composition has been investigated by two different procedures. One approach used detailed local information from a Monte Carlo simulation of a single chain, while the other took this information from a rotational isomeric state model devised decades ago for another purpose. The first approach uses PRISM theory to deduce the intermolecular packing in the polymer blend, while the second uses a Monte Carlo simulation of a coarse-grained representation of independent chains, expressed on a high-coordination lattice. Both approaches find a positive energy change upon mixing isotactic PP (iPP) and syndiotactic polypropylene (sPP) chains in the melt. This conclusion is qualitatively consistent with observations published recently by Muelhaupt and coworkers. The size of the energy change on mixing is smaller in the MC/PRISM approach than in the RIS/MC simulation, with the smaller energy change being in better agreement with experiment. The RIS/MC simulation finds no demixing for iPP and atactic polypropylene (aPP) in the melt, consistent with several experimental observations in the literature. The demixing of the iPP/sPP blend may arise from attractive interactions in the sPP melt that are disrupted when the sPP chains are diluted with aPP or iPP chains.
Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal
NASA Astrophysics Data System (ADS)
Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.
2014-05-01
Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering and other disciplines where materials are exposed to different forms of irradiation. Rutherford Backscattering/Channeling (RBS/C) and Monte Carlo (MC) simulations are the most convenient tools for this purpose, as they allow one to determine several features of lattice defects: their type, concentration and damage accumulation kinetics. On the other hand, various irradiation conditions can be efficiently modeled by ion irradiation without making the sample radioactive. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method for studying radiation damage in materials. The paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of the geometrical parameters of crystal lattice deformation in the vicinity of dislocations. Once these parameters and their variations within a distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth distribution profiles. The final outcome of the deconvolution procedure is the cross-section values calculated for the two types of defects observed (RDA and dislocations).
Mak, Chi H
2015-11-25
While single-stranded (ss) segments of DNAs and RNAs are ubiquitous in biology, details about their structures have only recently begun to emerge. To study ssDNA and RNAs, we have developed a new Monte Carlo (MC) simulation using a free energy model for nucleic acids that has the atomistic accuracy to capture fine molecular details of the sugar-phosphate backbone. Formulated on the basis of a first-principles calculation of the conformational entropy of the nucleic acid chain, this free energy model correctly reproduced both the long and short length-scale structural properties of ssDNA and RNAs in a rigorous comparison against recent data from fluorescence resonance energy transfer, small-angle X-ray scattering, force spectroscopy and fluorescence correlation transport measurements on sequences up to ∼100 nucleotides long. With this new MC algorithm, we conducted a comprehensive investigation of the entropy landscape of small RNA stem-loop structures. From a simulated ensemble of ∼10^6 equilibrium conformations, the entropy for the initiation of different size RNA hairpin loops was computed and compared against thermodynamic measurements. Starting from seeded hairpin loops, constrained MC simulations were then used to estimate the entropic costs associated with propagation of the stem. The numerical results provide new direct molecular insights into thermodynamic measurements from macroscopic calorimetry and melting experiments.
Design of a digital phantom population for myocardial perfusion SPECT imaging research.
Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric
2014-06-21
Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
Design of a digital phantom population for myocardial perfusion SPECT imaging research
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Du, Yong; Fung, George S. K.; Tsui, Benjamin M. W.; Links, Jonathan M.; Frey, Eric
2014-06-01
Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
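The post-simulation summing described above amounts to simulating each organ once with unit activity and then modeling any uptake pattern as a weighted sum of the stored per-organ projections, with Poisson noise applied last. A minimal sketch follows; the organ list, uptake values, and projection shapes are illustrative, not the XCAT/SimSET specifics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unit-activity projections, one per separately simulated organ (illustrative).
organs = ["heart", "liver", "lungs", "kidneys", "body", "gallbladder", "spleen"]
proj = {o: rng.uniform(0.0, 1.0, size=(64, 64)) for o in organs}

# A hypothetical uptake pattern (MBq per organ); varying these values models
# different patients without rerunning the MC simulation.
uptake = {"heart": 75.0, "liver": 100.0, "lungs": 20.0, "kidneys": 40.0,
          "body": 300.0, "gallbladder": 10.0, "spleen": 30.0}

noise_free = sum(uptake[o] * proj[o] for o in organs)
noisy = rng.poisson(noise_free)  # realistic count statistics applied last
print(noise_free.sum(), noisy.sum())
```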
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altsybeev, Igor
2016-01-22
In the present work, a Monte Carlo toy model with repulsing quark-gluon strings in hadron-hadron collisions is described. String repulsion creates transverse boosts for the string decay products, modifying observables. As an example, long-range correlations between the mean transverse momenta of particles in two observation windows are studied in an MC toy simulation of heavy-ion collisions.
Vertical Temperature Simulation of Pegasus Runway, McMurdo Station, Antarctica
2015-01-01
Report ERDC/CRREL TR-15-2, prepared for the National Science Foundation, Division of Polar Programs. Approved for public release; distribution is unlimited. (Abbreviations used in the report include GPR, Ground-Penetrating Radar; MIS, McMurdo Ice Shelf; and PIR, Precision Infrared Radiometer.)
SU-E-T-535: Proton Dose Calculations in Homogeneous Media.
Chapman, J; Fontenot, J; Newhauser, W; Hogstrom, K
2012-06-01
To develop a pencil beam dose calculation algorithm for scanned proton beams that improves modeling of scatter events. Our pencil beam algorithm (PBA) was developed for calculating dose from monoenergetic, parallel proton beams in homogeneous media. Fermi-Eyges theory was implemented for pencil beam transport. Elastic and nonelastic scatter effects were each modeled as a Gaussian distribution, with root mean square (RMS) widths determined from theoretical calculations and a nonlinear fit to a Monte Carlo (MC) simulated 1mm × 1mm proton beam, respectively. The PBA was commissioned using MC simulations in a flat water phantom. Resulting PBA calculations were compared with results of other models reported in the literature on the basis of differences between PBA and MC calculations of 80-20% penumbral widths. Our model was further tested by comparing PBA and MC results for oblique beams (45 degree incidence) and surface irregularities (step heights of 1 and 4 cm) for energies of 50-250 MeV and field sizes of 4cm × 4cm and 10cm × 10cm. Agreement between PBA and MC distributions was quantified by computing the percentage of points within 2% dose difference or 1mm distance to agreement. Our PBA improved agreement between calculated and simulated penumbral widths by an order of magnitude compared with previously reported values. For comparisons of oblique beams and surface irregularities, agreement between PBA and MC distributions was better than 99%. Our algorithm showed improved accuracy over other models reported in the literature in predicting the overall shape of the lateral profile through the Bragg peak. This improvement was achieved by incorporating nonelastic scatter events into our PBA. The increased modeling accuracy of our PBA, incorporated into a treatment planning system, may improve the reliability of treatment planning calculations for patient treatments. This research was supported by contract W81XWH-10-1-0005 awarded by The U.S. Army Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD 21702-5014. This report does not necessarily reflect the position or policy of the Government, and no official endorsement should be inferred. © 2012 American Association of Physicists in Medicine.
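The two-component idea above, separate Gaussians for elastic and nonelastic scatter, can be sketched directly as a lateral pencil-beam profile. The RMS widths and the nonelastic weight below are placeholders, not the fitted values from the commissioning against MC.

```python
import numpy as np

def lateral_profile(x, sigma_el=0.3, sigma_nonel=1.5, w_nonel=0.05):
    """Lateral dose of one pencil beam: narrow elastic Gaussian plus a broad,
    low-amplitude nonelastic Gaussian (widths in cm, illustrative)."""
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    return (1 - w_nonel) * g(sigma_el) + w_nonel * g(sigma_nonel)

x = np.linspace(-3.0, 3.0, 601)  # cm
d = lateral_profile(x)
# The 80-20% penumbral width of a broad-field edge built from such kernels is
# the comparison metric quoted above; here we just report the kernel FWHM.
mask = d >= d.max() / 2
print(f"profile FWHM ~ {x[mask][-1] - x[mask][0]:.2f} cm")
```

Superposing the broad nonelastic component is what restores the low-dose tails that a single-Gaussian kernel misses through the Bragg peak region.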
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randeniya, S; Mirkovic, D; Titt, U
2014-06-01
Purpose: In intensity modulated proton therapy (IMPT), energy dependent protons-per-monitor-unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A "verification plan" (i.e., treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs equal 1 MU (the vendor recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in protons/MU calibration factors between methods one and two translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions preserved the shape of the 3D dose distribution qualitatively, they were different quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations. Research supported by National Cancer Institute grant P01CA021239.
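The conversion whose sensitivity is analyzed above is simple arithmetic per energy layer: dose [Gy] = (energy deposition per proton, already in Gy) × (protons per MU) × MU. A minimal sketch, with all numbers invented, shows how a calibration-factor shift propagates directly into the absolute dose:

```python
def absolute_dose(edep_gy_per_proton, protons_per_mu, mu):
    """Convert MC energy-deposition output to absolute dose for one layer."""
    return edep_gy_per_proton * protons_per_mu * mu

d_a = absolute_dose(2.0e-12, 1.1e9, 100.0)          # calibration method A
d_b = absolute_dose(2.0e-12, 1.1e9 * 1.08, 100.0)   # 8% different calibration
print(f"{d_a:.3f} Gy vs {d_b:.3f} Gy -> {100 * (d_b - d_a) / d_a:.0f}% shift")
```

Because the factor multiplies every layer of a beam, beams carrying a larger share of the total MUs accumulate a proportionally larger absolute-dose error, which is the trend reported above.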
Dynamic multi-coil tailored excitation for transmit B1 correction at 7 Tesla.
Umesh Rudrapatna, S; Juchem, Christoph; Nixon, Terence W; de Graaf, Robin A
2016-07-01
Tailored excitation (TEx) based on interspersing multiple radio frequency pulses with linear gradient and higher-order shim pulses can be used to obtain a uniform flip angle in the presence of large radio frequency transmission (B1+) inhomogeneity. Here, an implementation of dynamic, multislice tailored excitation using the recently developed multi-coil nonlinear shim hardware (MC-DTEx) is reported. MC-DTEx was developed and tested both in a phantom and in vivo at 7 T, and its efficacy was quantitatively assessed. Predicted outcomes of MC-DTEx and DTEx based on spherical harmonic shims (SH-DTEx) were also compared. For a planned 30° flip angle in a phantom, the standard deviation in excitation improved from 28% (regular excitation) to 12% with MC-DTEx. The SD in in vivo excitation improved from 22% to 12%. The improvements achieved with experimental MC-DTEx closely matched the theoretical predictions. Simulations further showed that MC-DTEx outperforms SH-DTEx for both scenarios. Successful implementation of multislice MC-DTEx is presented and is shown to be capable of homogenizing excitation over more than twofold B1+ variations. Its benefits over SH-DTEx are also demonstrated. A distinct advantage of MC hardware over SH shim hardware is the absence of significant eddy current effects, which allows for a straightforward, multislice implementation of MC-DTEx. Magn Reson Med 76:83-93, 2016. © 2015 Wiley Periodicals, Inc.
Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.
Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood
2016-01-01
Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with one array detector was modeled with use of the MCNPX MC code. The MC calculated photon fluence in detector arrays was used for image reconstruction of a simple water phantom as well as polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and the Spline interpolation method. Using MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in the CT number with increasing absorbed dose for the studied gel. Also, our results showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.
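Filtered back-projection with a Hann-windowed ramp filter, as used for the image reconstruction above, can be sketched compactly. A minimal, idealized parallel-beam version in Python (the study's scanner geometry and spline interpolation are not reproduced; all names are ours):

    import numpy as np

    def fbp_hann(sinogram, angles_deg):
        """Parallel-beam FBP: Hann-windowed ramp filter + back-projection."""
        n_angles, n_det = sinogram.shape
        freqs = np.fft.fftfreq(n_det)
        ramp = np.abs(freqs)
        hann = 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs))  # 1 at DC, 0 at band edge
        filtered = np.real(np.fft.ifft(
            np.fft.fft(sinogram, axis=1) * ramp * hann, axis=1))
        xs = np.arange(n_det) - (n_det - 1) / 2.0
        X, Y = np.meshgrid(xs, xs)
        recon = np.zeros((n_det, n_det))
        for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
            t = X * np.cos(ang) + Y * np.sin(ang)          # detector coordinate
            idx = np.clip(t + (n_det - 1) / 2.0, 0, n_det - 2)
            i0 = idx.astype(int)                           # linear interpolation
            w = idx - i0
            recon += (1 - w) * proj[i0] + w * proj[i0 + 1]
        return recon * np.pi / (2 * n_angles)

    recon = fbp_hann(np.random.rand(180, 128), np.arange(180))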
Should adhesive debonding be simulated for intra-radicular post stress analyses?
Caldas, Ricardo A; Bacchi, Atais; Barão, Valentim A R; Versluis, Antheunis
2018-06-23
To elucidate the influence of debonding on stress distribution and maximum stresses for intra-radicular restorations, five intra-radicular restorations were analyzed by finite element analysis (FEA): MP = metallic cast post core; GP = glass fiber post core; PP = pre-fabricated metallic post core; RE = resin endocrown; CE = single-piece ceramic endocrown. Two cervical preparations were considered: no ferrule (f0) and a 2 mm ferrule (f1). The simulation was conducted in three steps: (1) intact bonds at all contacts; (2) bond failure between crown and tooth; (3) bond failure among the tooth, post, and crown interfaces. Contact friction and separation between interfaces were modeled where bond failure occurred. Mohr-Coulomb stress ratios (σMC ratio) and fatigue safety factors (SF) for dentin structure were compared with published strength values, fatigue life, and fracture patterns of teeth with intra-radicular restorations. The σMC ratio showed no differences among models in the first step. The second step increased the σMC ratio at the ferrule compared to step 1. At the third step, the σMC ratio and SF for f0 models were highly influenced by the post material. CE and RE models had the highest σMC ratio values and the lowest SF. MP had the lowest σMC ratio and the highest SF. The f1 models showed no relevant differences among them at the third step. FEA most closely predicted the failure performance of intra-radicular posts when frictional contact was modeled. Results of analyses in which all interfaces are assumed to be perfectly bonded should be considered with caution. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
Featured Image: Mixing Chemicals in Stars
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-10-01
How do stars mix chemicals in their interiors, leading to the abundances we measure at their surfaces? Two scientists from the Planetary Science Institute in Arizona, Tamara Rogers (Newcastle University, UK) and Jim McElwaine (Durham University, UK), have investigated the role that internal gravity waves play in chemical mixing in stellar interiors. Internal gravity waves (not to be confused with the currently topical gravitational waves) are waves that oscillate within a fluid that has a density gradient. Rogers and McElwaine used simulations to explore how these waves can cause particles in a star's interior to move around, gradually mixing the different chemical elements. Snapshots from four different times in their simulation can be seen below, with the white dots marking tracer particles and the colors indicating vorticity. You can see how the particles move in response to wave motion after the first panel. For more information, check out the paper below. Citation: T. M. Rogers and J. N. McElwaine 2017 ApJL 848 L1. doi:10.3847/2041-8213/aa8d13
NASA Astrophysics Data System (ADS)
Bieda, Bogusław; Grzesik, Katarzyna
2017-11-01
The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth element (REE) recovery from secondary materials, applied to processes at the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. The use of a stochastic approach characterizes the uncertainties better than a deterministic method. Uncertainty of data can be expressed through a definition of the probability distribution of that data (e.g., through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on literature, (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g., particulates, uranium-238, thorium-232), energy, and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best solution for quantifying the uncertainty in the LCI/LCA. The reliability of LCA results may be uncertain to a certain degree, but this uncertainty can be characterized with the help of the MC method.
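In practice the stochastic treatment amounts to drawing each uncertain inventory entry from its assigned probability distribution and reporting the spread of the propagated results instead of a single deterministic value. A minimal Python sketch with entirely hypothetical flows and lognormal parameters (not the study's data):

    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000  # number of MC trials

    # Hypothetical LCI entries as lognormal(mean of ln x, sd of ln x).
    entries = {
        "particulates_kg": (np.log(12.0), 0.30),
        "thorium232_MBq": (np.log(0.8), 0.50),
        "energy_GJ": (np.log(150.0), 0.20),
    }

    for name, (mu, sigma) in entries.items():
        s = rng.lognormal(mu, sigma, N)
        lo, hi = np.percentile(s, [2.5, 97.5])
        print(f"{name}: median={np.median(s):.2f}, 95% interval=({lo:.2f}, {hi:.2f})")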
The high performance of nanocrystalline CVD diamond coated hip joints in wear simulator tests.
Maru, M M; Amaral, M; Rodrigues, S P; Santos, R; Gouvea, C P; Archanjo, B S; Trommer, R M; Oliveira, F J; Silva, R F; Achete, C A
2015-09-01
Nanocrystalline diamond (NCD) coatings grown by a chemical vapor deposition (CVD) method have already shown high wear resistance in ball-on-plate experiments under physiological liquid lubrication. However, tests with a close-to-real approach were missing, and this constitutes the aim of the present work. Hip joint wear simulator tests were performed with cups and heads made of silicon nitride coated with NCD of ~10 μm in thickness. Five million testing cycles (Mc) were run, which represent nearly five years of hip joint implant activity in a patient. For the wear analysis, gravimetry, profilometry, scanning electron microscopy, and Raman spectroscopy techniques were used. After 0.5 Mc of wear testing, truncation of the protruded regions of the NCD film occurred as a result of a fine-scale abrasive wear mechanism, evolving to extensive plateau regions and a highly polished surface condition (Ra < 10 nm). This surface modification took place without any catastrophic features such as cracking, grain pullouts, or delamination of the coatings. A steady-state volumetric wear rate of 0.02 mm³/Mc, equivalent to a linear wear of 0.27 μm/Mc, compares favorably with the best performance reported in the literature for the fourth-generation alumina ceramic (0.05 mm³/Mc). Also, squeaking, a quite common phenomenon in hard-on-hard systems, was absent in the present all-NCD system. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard
2007-06-01
The simultaneous use of the Reaction Ensemble Monte Carlo (ReMC) method and the Adaptive Erpenbeck EOS (AE-EOS) method allows us to calculate directly the thermodynamic and chemical equilibrium of a mixture on the Hugoniot curve. The ReMC method allows the detonation products to reach chemical equilibrium, and the AE-EOS method constrains the system to satisfy the Hugoniot relation. Once the Crussard curve of detonation products has been established, CJ state properties may be calculated. An additional NPT simulation is performed at CJ conditions in order to compute derivative thermodynamic quantities such as Cp, Cv, the Grüneisen gamma, the sound velocity, and the compressibility factor. Several explosives have been studied, among which PETN, nitromethane, tetranitromethane, and hexanitroethane. In these first simulations, solid carbon, where present, is treated using an EOS.
A simulation model of IT risk on program trading
NASA Astrophysics Data System (ADS)
Xia, Bingying; Jiang, Wenbao; Luo, Guangxuan
2015-12-01
The biggest difficulty in measuring the IT risk of program trading lies in the lack of loss data. In view of this situation, the current approach of scholars is to collect reports of IT incidents of all kinds, both at home and abroad, from courts, networks, and other public media, and to perform quantitative analysis of IT risk losses based on the resulting database. However, an IT risk loss database established by this method can only fuzzily reflect the real situation and cannot provide a fundamental explanation of it. In this paper, based on a study of the concept and steps of MC simulation, we use a computer simulation method: the MC simulation method within the "Program trading simulation system" developed by our team is used to simulate real program trading, and IT risk loss data are obtained through IT failure experiments; at the end of the article, the effectiveness of the experimental data is verified. In this way, we better overcome the deficiency of the traditional research method and solve the problem of the lack of IT risk data in quantitative research, and we provide researchers with a set of simulation-based ideas and a process template for such studies.
Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank
2018-02-01
Cyclotron-based pencil beam scanning (PBS) proton machines represent nowadays the majority and most affordable choice for proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than that of passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum energy to the desired energy, resulting in a unique spot size, divergence, and energy spread depending on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated using experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water plans show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations of how MC codes can be adapted to simulate cyclotron-based scanning proton machines are given, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model is better able to describe the experimental beam. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Traffic accident simulation : final report.
DOT National Transportation Integrated Search
1992-06-01
The purpose of this research was to determine if HVOSM (Highway Vehicle Object Simulation Model) could be used to model a vehicle with a modern front (or rear) suspension system such as a McPherson strut and have the results of the dynamic model be v...
NASA Astrophysics Data System (ADS)
Lin, Y.; Wukitch, S. J.; Edlund, E.; Ennever, P.; Hubbard, A. E.; Porkolab, M.; Rice, J.; Wright, J.
2017-10-01
In recent three-ion species (majority D and H plus a trace level of 3He) ICRF heating experiments on Alcator C-Mod, double mode conversion (MC) on both sides of the 3He cyclotron resonance has been observed using the phase contrast imaging (PCI) system. The MC locations are used to estimate the species concentrations in the plasma. Simulation using TORIC shows that with the 3He level <1%, most RF power is absorbed by the 3He ions and the process can generate energetic 3He ions. In MC flow drive experiments in D(3He) plasma at 8 T, MC waves were also monitored by PCI. The MC ion cyclotron wave (ICW) amplitude and wavenumber kR have been found to correlate with the flow drive force. The MC efficiency, the wavenumber k of the MC ICW, and their dependence on plasma parameters such as Te0 have been studied. Based on the experimental observations and a numerical study of the dispersion solutions, a hypothesis for the flow drive mechanism has been proposed.
Siebers, Jeffrey V
2008-04-04
Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs < 100 cGy (1.5% of the prescription dose) for MC-optimized IMRT treatment plans. Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to the maximum dose for voxels with D > 0.5Dmax. The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
Proton-induced x-ray fluorescence CT imaging
Bazalova-Carter, Magdalena; Ahmad, Moiz; Matsuura, Taeko; Takao, Seishin; Matsuo, Yuto; Fahrig, Rebecca; Shirato, Hiroki; Umegaki, Kikuo; Xing, Lei
2015-01-01
Purpose: To demonstrate the feasibility of proton-induced x-ray fluorescence CT (pXFCT) imaging of gold in a small-animal-sized object by means of experiments and Monte Carlo (MC) simulations. Methods: First, proton-induced gold x-ray fluorescence (pXRF) was measured as a function of gold concentration. Vials of 2.2 cm in diameter filled with 0%–5% Au solutions were irradiated with a 220 MeV proton beam, and the x-ray fluorescence induced by the interaction of protons and Au was detected with a 3 × 3 mm² CdTe detector placed at 90° with respect to the incident proton beam at a distance of 45 cm from the vials. Second, a 7 cm diameter water phantom containing three 2.2 cm diameter vials with 3%–5% Au solutions was imaged with a 7 mm FWHM 220 MeV proton beam in a first-generation CT scanning geometry. X-rays scattered perpendicular to the incident proton beam were acquired with the CdTe detector placed at 45 cm from the phantom positioned on a translation/rotation stage. Twenty-one translational steps spaced by 3 mm at each of 36 projection angles spaced by 10° were acquired, and pXFCT images of the phantom were reconstructed with filtered back projection. A simplified geometry of the experimental data acquisition setup was modeled with the MC TOPAS code, and simulation results were compared to the experimental data. Results: A linear relationship between gold pXRF and gold concentration was observed in both the experimental and MC simulation data (R² > 0.99). All Au vials were apparent in the experimental and simulated pXFCT images. Specifically, the 3% Au vial was detectable in the experimental [contrast-to-noise ratio (CNR) = 5.8] and simulated (CNR = 11.5) pXFCT images. Due to fluorescence x-ray attenuation in the higher concentration vials, the 4% and 5% Au contrasts were underestimated by 10% and 15%, respectively, in both the experimental and simulated pXFCT images. Conclusions: Proton-induced x-ray fluorescence CT imaging of 3%–5% gold solutions in a small-animal-sized water phantom has been demonstrated for the first time by means of experiments and MC simulations. PMID:25652502
Electrolytes in a nanometer slab-confinement: Ion-specific structure and solvation forces
NASA Astrophysics Data System (ADS)
Kalcher, Immanuel; Schulz, Julius C. F.; Dzubiella, Joachim
2010-10-01
We study the liquid structure and solvation forces of dense monovalent electrolytes (LiCl, NaCl, CsCl, and NaI) in a nanometer slab-confinement by explicit-water molecular dynamics (MD) simulations, implicit-water Monte Carlo (MC) simulations, and modified Poisson-Boltzmann (PB) theories. In order to consistently coarse-grain and to account for specific hydration effects in the implicit methods, realistic ion-ion and ion-surface pair potentials have been derived from infinite-dilution MD simulations. The electrolyte structure calculated from MC simulations is in good agreement with the corresponding MD simulations, thereby validating the coarse-graining approach. The agreement improves if a realistic, MD-derived dielectric constant is employed, which partially corrects for (water-mediated) many-body effects. Further analysis of the ionic structure and solvation pressure demonstrates that nonlocal extensions to PB (NPB) perform well for a wide parameter range when compared to MC simulations, whereas all local extensions mostly fail. A Barker-Henderson mapping of the ions onto a charged, asymmetric, and nonadditive binary hard-sphere mixture shows that the strength of structural correlations is strongly related to the magnitude and sign of the salt-specific nonadditivity. Furthermore, a grand canonical NPB analysis shows that the Donnan effect is dominated by steric correlations, whereas solvation forces and overcharging effects are mainly governed by ion-surface interactions. However, steric corrections to solvation forces are strongly repulsive for high concentrations and low surface charges, while overcharging can also be triggered by steric interactions in strongly correlated systems. Generally, we find that ion-surface and ion-ion correlations are strongly coupled and that coarse-grained methods should include both, the latter treated nonlocally and nonadditively (as given by our specific ionic diameters), when studying electrolytes in highly inhomogeneous situations.
On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.
Thomson, Rowan M; Kawrakow, Iwan
2011-08-01
The validity of "classical" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in classical MC simulations of radiation transport, in which position and momentum are known precisely. Using the quantum uncertainty relation and the electron mean free path, the magnitudes of the uncertainties on electron position and momentum are calculated for different kinetic energies, and a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water, as the uncertainties on position and momentum must be large (relative to the electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or nanometer scale.
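The validity bound can be reproduced with a back-of-the-envelope calculation: take Δx as a fraction of the mean free path, and check whether the corresponding minimum Δp from Δx·Δp ≥ ħ/2 stays small relative to the electron momentum. A Python sketch with relativistic kinematics (the ~1 nm mean free path for 1 keV electrons in water is an assumed input here, not a value quoted from the paper):

    import numpy as np

    HBAR = 1.0545718e-34    # J s
    ME = 9.1093837e-31      # kg
    C = 2.99792458e8        # m/s
    EV = 1.602176634e-19    # J

    def relative_momentum_uncertainty(kinetic_eV, mean_free_path_m, frac=0.05):
        """Minimum dp/p when dx is a fraction `frac` of the mean free path."""
        T = kinetic_eV * EV
        pc = np.sqrt(T**2 + 2.0 * T * ME * C**2)  # relativistic (pc)^2 = T^2 + 2T mc^2
        p = pc / C
        dp = HBAR / (2.0 * frac * mean_free_path_m)
        return dp / p

    # Assumed ~1 nm mean free path; yields dp/p of a few per cent at 1 keV.
    print(relative_momentum_uncertainty(1000.0, 1e-9))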
SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinhart, M; Fast, M F; Nill, S
2016-06-15
Purpose: Mimicking state-of-the-art patient radiotherapy with high-precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275 mm)³ voxels. Results: We observe good agreement for all cases. For field sizes of 1 mm² to 1 cm² in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths > 5 cm (< 2.5%). The out-of-field doses at 1 cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1 mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation times. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR. MFF is supported by Cancer Research UK under Programme C33589/A19908.
NASA Astrophysics Data System (ADS)
Bauer, J.; Unholtz, D.; Kurz, C.; Parodi, K.
2013-08-01
We report on the experimental campaign carried out at the Heidelberg Ion-Beam Therapy Center (HIT) to optimize the Monte Carlo (MC) modelling of proton-induced positron-emitter production. The presented experimental strategy constitutes a pragmatic inverse approach to overcome the known uncertainties in the modelling of positron-emitter production due to the lack of reliable cross-section data for the relevant therapeutic energy range. This work is motivated by the clinical implementation of offline PET/CT-based treatment verification at our facility. Here, the irradiation-induced tissue activation in the patient is monitored shortly after the treatment delivery by means of a commercial PET/CT scanner and compared to a MC-simulated activity expectation, derived under the assumption of a correct treatment delivery. At HIT, the MC particle transport and interaction code FLUKA is used for the simulation of the expected positron-emitter yield. For this particular application, the code is coupled to externally provided cross-section data of several proton-induced reactions. Studying the positron-emitting radionuclide yield experimentally in homogeneous phantoms provides access to the fundamental production channels. Therefore, five different materials have been irradiated by monoenergetic proton pencil beams at various energies and the induced β+ activity subsequently acquired with a commercial full-ring PET/CT scanner. With the analysis of dynamically reconstructed PET images, we are able to determine separately the spatial distribution of different radionuclide concentrations at the starting time of the PET scan. The laterally integrated radionuclide yields in depth are used to tune the input cross-section data such that the impact of both the physical production and the imaging process on the various positron-emitter yields is reproduced. The resulting cross-section data sets allow the absolute level of measured β+ activity induced in the investigated targets to be modelled within a few per cent. Moreover, the simulated distal activity fall-off positions, representing the central quantity for treatment monitoring in terms of beam range verification, are found to agree within 0.6 mm with the measurements at different initial beam energies in both homogeneous and heterogeneous targets. Based on work presented at the Third European Workshop on Monte Carlo Treatment Planning (Seville, 15-18 May 2012).
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Evans, K. J.
2017-12-01
Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select the optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with up to 600 days of computing time saved when one processor is used.
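The core of MLMC is a telescoping sum: the expectation at the finest fidelity equals the coarse-level expectation plus corrections between successive levels, each estimated with a sample count matched to its (decaying) variance and (growing) cost. A generic Python sketch with a synthetic stand-in for the reservoir simulator (not the authors' model):

    import numpy as np

    rng = np.random.default_rng(0)

    def model(theta, level):
        """Stand-in simulator; discretization error shrinks as level grows."""
        return np.sin(theta) + 2.0 ** (-level) * np.cos(5.0 * theta)

    def mlmc_mean(levels, n_samples):
        """Telescoping MLMC estimate of E[model at the finest level]."""
        est = 0.0
        for ell, n in zip(levels, n_samples):
            theta = rng.normal(size=n)       # draws of the uncertain input
            if ell == levels[0]:
                est += model(theta, ell).mean()   # base level: plain MC
            else:
                correction = model(theta, ell) - model(theta, ell - 1)
                est += correction.mean()     # same draws -> correlated pair
        return est

    # Many cheap coarse runs, few expensive fine runs.
    print(mlmc_mean(levels=[0, 1, 2, 3], n_samples=[4000, 1000, 250, 60]))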
NASA Astrophysics Data System (ADS)
Bellos, Vasilis; Tsakiris, George
2016-09-01
The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically based two-dimensional hydrodynamic model with the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at a substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations, which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
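Once the unit hydrographs have been derived from FLOW-R2D, the hydrological half of the hybrid method reduces to a discrete convolution of the rainfall excess with the unit hydrograph ordinates. A minimal Python sketch with made-up ordinates and hyetograph (infiltration assumed already removed, e.g., by the Green-Ampt model):

    import numpy as np

    # Hypothetical 30-min unit hydrograph ordinates (m^3/s per mm of excess).
    uh = np.array([0.0, 0.8, 2.1, 3.0, 2.2, 1.1, 0.4, 0.1])

    # Hypothetical rainfall excess per 30-min interval (mm).
    excess = np.array([0.0, 2.5, 6.0, 3.5, 1.0])

    # Outlet hydrograph = discrete convolution of excess with the unit hydrograph.
    q = np.convolve(excess, uh)
    print(np.round(q, 2))   # m^3/s at successive 30-min steps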
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nusrat, H; Pang, G; Sarfehnia, A
Purpose: This work seeks to develop a beam quality meter using multiple differently doped plastic scintillators that are thus intrinsically beam-quality dependent. Plastic scintillators spontaneously emit visible light upon irradiation; the amount of light produced is dependent on stopping power (closely related to LET) according to Birks' law. Doping plastic scintillators can be used to tune their sensitivity to specific LET ranges. Methods: GEANT4.10.1 Monte Carlo (MC) was used to evaluate the response of various scintillator dopant combinations. The MC radiation transport and scintillator light response were validated against previously published literature. Current work involves evaluating the detector response experimentally; to that end, a detector prototype with interchangeable scintillator housing was constructed. The measurement set-up guides light emitted by the scintillator to a photomultiplier tube via a glass taper junction coupled to an optical fiber. The resulting signal is measured by an electrometer and normalized to the dose readout from a diode. Measurements have been done using clinical electron and orthovoltage beams. The MC response (simulated scintillator light normalized to dose scored inside the scintillating volume) was evaluated for four different LET radiations for an undoped and a 1% Pb doped scintillator (σ = 0.85%). Simulated incident electrons included: 0.05, 0.1, 0.2, 6, 12, and 18 MeV; these energies correspond to a range of stopping power (related to LET) values from 1.824 to 11.09 MeV cm²/g (SCOL from NIST-ESTAR). Results: Initial MC results show a distinct divergence in scintillator response as LET increases. The response of the undoped plastic scintillator indicated a 35.0% increase in signal when going from 18 MeV (low LET) to 0.05 MeV (high LET), while the 1% Pb doped scintillator indicated a 100.9% increase. Conclusion: After validating MC against measurement, simulations will be used to test various concentrations (2%, 4%, 6%) of different high-Z material dopants (W, Mo) to optimize the scintillator types for the beam quality meter. NSERC Discovery Grant RGPIN-435608.
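Birks' law, which underlies the LET dependence exploited above, saturates the light output per unit path length as the stopping power grows. A minimal Python sketch of that saturation (the Birks constant below is illustrative; the study's doped-scintillator parameters are not given in the abstract):

    import numpy as np

    def birks_light_yield(dedx, S=1.0, kB=0.05):
        """Birks' law: dL/dx = S*(dE/dx) / (1 + kB*(dE/dx)).

        dedx : stopping power, MeV cm^2/g (range quoted in the abstract)
        S    : scintillation efficiency (arbitrary units)
        kB   : Birks constant, g/(MeV cm^2) -- illustrative value only
        """
        return S * dedx / (1.0 + kB * dedx)

    for dedx in (1.824, 11.09):   # the low- and high-LET ends quoted above
        # Light per unit deposited energy drops as LET rises (quenching).
        print(dedx, birks_light_yield(dedx) / dedx)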
Gu, Yanqing; Wang, Qing; Cui, Weiding; Fan, Weimin
2012-01-01
Background: Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the Durom large head metal-on-metal (MOM) total hips in simulators. Methods: Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. Results: The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78±8.86 µm. The average volumetric wear rate in the experimental group and in the control group in the first million cycles was 6.65±0.29 mm³/MC and 0.89±0.04 mm³/MC (t = 48.43, p = 0.000). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. Conclusions: This finding implies that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation will result in increased volumetric wear and increased ion release. Clinical Relevance: This study explored the deformation of the Durom cup after implantation and clarified the impact of deformation on wear and ion release of the prosthesis. Deformation of the cup after implantation increases the wear of MOM bearings and the resulting ion levels. The clinical use of the Durom large head prosthesis should be undertaken with great care. PMID:23144694
Liu, Feng; Chen, Zhefeng; Gu, Yanqing; Wang, Qing; Cui, Weiding; Fan, Weimin
2012-01-01
Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the Durom large head metal-on-metal (MOM) total hips in simulators. Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78 ± 8.86 µm. The average volumetric wear rate in the experimental group and in the control group in the first million cycles was 6.65 ± 0.29 mm³/MC and 0.89 ± 0.04 mm³/MC (t = 48.43, p = 0.000). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. This finding implies that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation will result in increased volumetric wear and increased ion release. This study explored the deformation of the Durom cup after implantation and clarified the impact of deformation on wear and ion release of the prosthesis. Deformation of the cup after implantation increases the wear of MOM bearings and the resulting ion levels. The clinical use of the Durom large head prosthesis should be undertaken with great care.
NASA Astrophysics Data System (ADS)
Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid
2018-03-01
Based on Wertheim's second-order thermodynamic perturbation theory (TPT2), equations of state (EOSs) are presented for the fluid and solid phases of tangent, freely jointed spheres. It is considered that the spheres interact with each other through the Weeks-Chandler-Andersen (WCA) potential. The developed TPT2 EOS is the sum of a monomeric reference term and a perturbation contribution due to bonding. MC NVT simulations are performed to determine the structural properties of the reference system in the reduced temperature range 0.6 ≤ T* ≤ 4.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. Mathematical functions are fitted to the simulation results of the reference system and employed in the framework of Wertheim's theory to develop TPT2 EOSs for the fluid and solid phases. The extended EOSs are compared to MC NPT simulation results for the compressibility factor and internal energy of the fully flexible chain systems. Simulations are performed for the WCA chain system for chain lengths of up to 15 at T* = 1.0, 1.5, 2.0, 3.0. Across all the reduced temperatures, the agreement between the results of the TPT2 EOS and the MC simulations is remarkable. The overall average absolute relative percent deviation at T* = 1.0 for the compressibility factor, over the entire range of chain lengths we covered, is 0.51 and 0.77 for the solid and fluid phases, respectively. Similar features are observed for the residual internal energy.
Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid
2018-03-14
Based on Wertheim's second-order thermodynamic perturbation theory (TPT2), equations of state (EOSs) are presented for the fluid and solid phases of tangent, freely jointed spheres. It is considered that the spheres interact with each other through the Weeks-Chandler-Andersen (WCA) potential. The developed TPT2 EOS is the sum of a monomeric reference term and a perturbation contribution due to bonding. MC NVT simulations are performed to determine the structural properties of the reference system in the reduced temperature range 0.6 ≤ T* ≤ 4.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. Mathematical functions are fitted to the simulation results of the reference system and employed in the framework of Wertheim's theory to develop TPT2 EOSs for the fluid and solid phases. The extended EOSs are compared to MC NPT simulation results for the compressibility factor and internal energy of the fully flexible chain systems. Simulations are performed for the WCA chain system for chain lengths of up to 15 at T* = 1.0, 1.5, 2.0, 3.0. Across all the reduced temperatures, the agreement between the results of the TPT2 EOS and the MC simulations is remarkable. The overall average absolute relative percent deviation at T* = 1.0 for the compressibility factor, over the entire range of chain lengths we covered, is 0.51 and 0.77 for the solid and fluid phases, respectively. Similar features are observed for the residual internal energy.
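The WCA interaction used for the reference monomers in the two entries above is the Lennard-Jones potential truncated at its minimum and shifted so it goes continuously to zero there, leaving only the repulsive core. A minimal Python sketch:

    import numpy as np

    def wca(r, epsilon=1.0, sigma=1.0):
        """Weeks-Chandler-Andersen potential: purely repulsive LJ core.

        LJ truncated at r_min = 2^(1/6)*sigma and shifted up by epsilon.
        """
        r = np.asarray(r, dtype=float)
        rc = 2.0 ** (1.0 / 6.0) * sigma
        sr6 = (sigma / r) ** 6
        u = 4.0 * epsilon * (sr6**2 - sr6) + epsilon
        return np.where(r < rc, u, 0.0)

    print(wca([0.95, 1.0, 2.0 ** (1.0 / 6.0), 1.3]))  # repulsive, then exactly 0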
Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-02-01
The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used for obtaining the dose distribution in the phantom without using the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum are compared with the results of superposition. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the results of the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, adding the dose of each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.
Toward GPGPU accelerated human electromechanical cardiac simulations
Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David
2014-01-01
In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single-core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human-scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492
Performance Analysis of HF Band FB-MC-SS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussein Moradi; Stephen Andrew Laraway; Behrouz Farhang-Boroujeny
In a recent paper [1] the filter bank multicarrier spread spectrum (FB-MC-SS) waveform was proposed for wideband spread spectrum HF communications. A significant benefit of this waveform is robustness against narrowband and partial-band interference. Simulation results in [1] demonstrated good performance in a wideband HF channel over a wide range of conditions. In this paper we present a theoretical analysis of the bit error probability for this system. Our analysis tailors the results from [2], where BER performance was analyzed for maximum ratio combining systems, accounting for correlation between subcarriers and channel estimation error. Equations are given for BER that closely match the simulated performance in most situations.
McStas model of the Delft SESANS
NASA Astrophysics Data System (ADS)
Knudsen, E.; Udby, L.; Willendrup, P. K.; Lefmann, K.; Bouwman, W. G.
2011-06-01
We present simulation results taking first virtual data from a model of the Spin-Echo Small Angle Scattering (SESANS) instrument situated in Delft, in the framework of the McStas Monte Carlo software package. The main focus has been on making a model of the Delft SESANS instrument, and we can now present the first virtual data from it, using a refracting prism-like sample model. In consequence, polarisation instrumentation is now included natively in the McStas kernel, including options for magnetic fields and a number of utility components. This development has brought us to a point where realistic models of polarisation-enabled instrumentation can be built.
Real-time simulator for designing electron dual scattering foil systems.
Carver, Robert L; Hogstrom, Kenneth R; Price, Michael J; LeBlanc, Justin D; Pitcher, Garrett M
2014-11-08
The purpose of this work was to develop a user-friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator allows for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator also is a powerful educational tool. The simulator consists of an analytical algorithm for calculating electron fluence and X-ray dose and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with the reduced Gaussian formalism for scattering powers. The simulator also estimates central-axis and off-axis X-ray dose arising from the dual foil system. Once the geometry of the accelerator is specified, the simulator allows the user to continuously vary the primary scattering foil material and thickness, the secondary scattering foil material and Gaussian shape (thickness and sigma), and the beam energy. The off-axis electron relative fluence or total dose profile and the central-axis X-ray dose contamination are computed and displayed in real time. The simulator was validated by comparison of off-axis electron relative fluence and X-ray percent dose profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV, using present foils on an Elekta radiotherapy accelerator, the simulator was able to reproduce MC profiles to within 2% out to 20 cm from the central axis. The central-axis X-ray percent dose predictions matched measured data to within 0.5%. The calculation time was approximately 100 ms using a single Intel 2.93 GHz processor, which allows for real-time variation of foil geometrical parameters using slider bars. This work demonstrates how the user-friendly GUI and real-time nature of the simulator make it an effective educational tool for gaining a better understanding of the effects that various system parameters have on a relative dose profile. This work also demonstrates a method for using the simulator as a design tool for creating custom dual scattering foil systems in the clinical range of beam energies (6-20 MeV).
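In the Fermi-Eyges picture each foil adds a Gaussian angular spread, so a Gaussian-shaped secondary foil scatters the beam centre hardest and thereby flattens the combined profile. A heavily simplified 1D Python sketch of that mechanism (single effective Gaussian per foil, no energy loss, no foil-to-foil angular correlation, no X-ray component; all geometry numbers are illustrative, not an Elekta design):

    import numpy as np

    z1, z2 = 10.0, 90.0                  # cm: source->secondary foil, foil->plane
    theta1 = 0.03                        # rad, rms scatter in the primary foil
    x_foil = np.linspace(-6, 6, 401)     # cm, positions across the secondary foil
    x_meas = np.linspace(-25, 25, 501)   # cm, measurement plane

    # Fluence incident on the secondary foil: Gaussian from the primary foil.
    f_in = np.exp(-x_foil**2 / (2.0 * (theta1 * z1) ** 2))

    # Gaussian-shaped secondary foil: thicker centre scatters more.
    theta2 = 0.04 * np.exp(-x_foil**2 / (2.0 * 1.5**2))

    profile = np.zeros_like(x_meas)
    for xf, w, t2 in zip(x_foil, f_in, theta2):
        sigma = np.sqrt(theta1**2 + t2**2) * z2        # spread at the plane
        profile += w * np.exp(-(x_meas - xf) ** 2 / (2.0 * sigma**2)) / sigma

    profile /= profile.max()
    print(profile[np.abs(x_meas) <= 10].min())   # flatness over +/-10 cm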
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
Data decomposition of Monte Carlo particle transport simulations via tally servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
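The message pattern behind the tally server decomposition is simple: tracking ranks stream (tally index, score) pairs to the rank(s) owning the tally memory, which accumulate them without blocking particle tracking. A schematic mpi4py sketch of that pattern (illustrative only; OpenMC's actual implementation batches scores and uses optimized communication):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    N_SERVERS, TALLY_BINS, DONE = 1, 1000, -1
    is_server = rank >= size - N_SERVERS

    if is_server:
        tally = np.zeros(TALLY_BINS)
        finished = 0
        while finished < size - N_SERVERS:
            msg = comm.recv(source=MPI.ANY_SOURCE)
            if msg == DONE:
                finished += 1              # a tracker ran out of particles
            else:
                idx, score = msg
                tally[idx] += score        # accumulate the streamed score
        print("total score:", tally.sum())
    else:
        rng = np.random.default_rng(rank)
        for _ in range(10_000):            # stand-in for particle histories
            comm.send((int(rng.integers(TALLY_BINS)), rng.random()),
                      dest=size - 1)
        comm.send(DONE, dest=size - 1)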
Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V
2016-05-14
In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce 2 different methods which differ primarily in the exchange scheme between the parallel ensembles. We apply this approach to the folding of 2 different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the 2 techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.
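Whatever generates the per-replica dynamics (here, the kMC-based coarse moves), temperature replica exchange accepts a swap of configurations between neighbouring temperatures with the standard Metropolis criterion. A minimal Python sketch of that bookkeeping (generic REMD, not the authors' exchange schemes):

    import numpy as np

    rng = np.random.default_rng(1)
    KB = 0.0019872   # kcal/(mol K)

    def attempt_swaps(energies, temps):
        """One sweep of neighbour swaps; returns which replica sits at each T."""
        order = list(range(len(temps)))
        for i in range(len(temps) - 1):
            a, b = order[i], order[i + 1]
            beta_i = 1.0 / (KB * temps[i])
            beta_j = 1.0 / (KB * temps[i + 1])
            # delta = -ln(acceptance ratio) for exchanging the two configurations
            delta = (beta_i - beta_j) * (energies[b] - energies[a])
            if delta <= 0 or rng.random() < np.exp(-delta):
                order[i], order[i + 1] = b, a
        return order

    print(attempt_swaps(energies=np.array([-120.0, -118.5, -116.0]),
                        temps=[300.0, 330.0, 363.0]))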
Metabolite-cycled STEAM and semi-LASER localization for MR spectroscopy of the human brain at 9.4T.
Giapitzakis, Ioannis-Angelos; Shao, Tingting; Avdievich, Nikolai; Mekle, Ralf; Kreis, Roland; Henning, Anke
2018-04-01
Metabolite cycling (MC) is an MRS technique for the simultaneous acquisition of water and metabolite spectra that avoids chemical exchange saturation transfer effects and for which water may serve as a reference signal or contain additional information in functional or diffusion studies. Here, MC was developed for human investigations at ultrahigh field. MC-STEAM and MC-semi-LASER are introduced at 9.4T with an optimized inversion pulse and an elaborate coil setup. Experimental and simulation results are given for the implementation of adiabatic inversion pulses for MC. The two techniques are compared, and the effect of frequency and phase correction based on the MC water spectra is evaluated. Finally, absolute quantification of metabolites is performed. The proposed coil configuration results in a maximum B1+ of 48 μT in a voxel within the occipital lobe. Frequency and phase correction of single acquisitions improve the signal-to-noise ratio (SNR) and linewidth, leading to high-resolution spectra. The improvements in the SNR of N-acetylaspartate (SNRNAA) for frequency-aligned data acquired with MC-STEAM and MC-semi-LASER are 37% and 30%, respectively (P < 0.05). Moreover, a doubling of the SNRNAA for MC-semi-LASER in comparison with MC-STEAM is observed (P < 0.05). Concentration levels for 18 metabolites from the human occipital lobe are reported, as acquired with both MC-STEAM and MC-semi-LASER. This work introduces a novel methodology for single-voxel MRS on a 9.4T whole-body scanner and highlights the advantages of semi-LASER compared to STEAM in terms of excitation profile. In comparison with MC-STEAM, MC-semi-LASER yields spectra with higher SNR. Magn Reson Med 79:1841-1850, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-29
... Federal Motor Carrier Safety Administration (FMCSA) as motor carriers of passengers (license nos. MC... the FMCSA as a motor carrier of passengers (MC-324772) and holds an intrastate registration from the... state that Sundiego operates 58 full-sized motor coaches and 9 smaller vehicles (including minibuses...
STS-92 Mission Specialist McArthur has his launch and entry suit adjusted
NASA Technical Reports Server (NTRS)
2000-01-01
In the Operations and Checkout Building, STS-92 Mission Specialist William S. McArthur Jr. has the gloves on his launch and entry suit adjusted during fit check. McArthur and the rest of the crew are at KSC for Terminal Countdown Demonstration Test activities. The TCDT provides emergency egress training, simulated countdown exercises and opportunities to inspect the mission payload. This mission will be McArthur's third Shuttle flight. STS-92 is scheduled to launch Oct. 5 at 9:38 p.m. EDT from Launch Pad 39A on the fifth flight to the International Space Station. It will carry two elements of the Space Station, the Integrated Truss Structure Z1 and the third Pressurized Mating Adapter. The mission is also the 100th flight in the Shuttle program.
A Study of Neutron Leakage in Finite Objects
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
A computationally efficient 3DHZETRN code capable of simulating High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.
NASA Astrophysics Data System (ADS)
Amenomori, M.; Bi, X. J.; Chen, D.; Chen, T. L.; Chen, W. Y.; Cui, S. W.; Danzengluobu; Ding, L. K.; Feng, C. F.; Feng, Zhaoyang; Feng, Z. Y.; Gou, Q. B.; Guo, Y. Q.; He, H. H.; He, Z. T.; Hibino, K.; Hotta, N.; Hu, Haibing; Hu, H. B.; Huang, J.; Jia, H. Y.; Jiang, L.; Kajino, F.; Kasahara, K.; Katayose, Y.; Kato, C.; Kawata, K.; Kozai, M.; Labaciren; Le, G. M.; Li, A. F.; Li, H. J.; Li, W. J.; Liu, C.; Liu, J. S.; Liu, M. Y.; Lu, H.; Meng, X. R.; Miyazaki, T.; Munakata, K.; Nakajima, T.; Nakamura, Y.; Nanjo, H.; Nishizawa, M.; Niwa, T.; Ohnishi, M.; Ohta, I.; Ozawa, S.; Qian, X. L.; Qu, X. B.; Saito, T.; Saito, T. Y.; Sakata, M.; Sako, T. K.; Shao, J.; Shibata, M.; Shiomi, A.; Shirai, T.; Sugimoto, H.; Takita, M.; Tan, Y. H.; Tateyama, N.; Torii, S.; Tsuchiya, H.; Udo, S.; Wang, H.; Wu, H. R.; Xue, L.; Yamamoto, Y.; Yamauchi, K.; Yang, Z.; Yuan, A. F.; Zhai, L. M.; Zhang, H. M.; Zhang, J. L.; Zhang, X. Y.; Zhang, Y.; Zhang, Yi; Zhang, Ying; Zhaxisangzhu; Zhou, X. X.; Tibet ASγ Collaboration
2018-06-01
We examine the possible influence of Earth-directed coronal mass ejections (ECMEs) on the Sun's shadow in the 3 TeV cosmic-ray intensity observed by the Tibet-III air shower (AS) array. We confirm a clear solar-cycle variation of the intensity deficit in the Sun's shadow during the ten years between 2000 and 2009. This solar-cycle variation is overall reproduced by our Monte Carlo (MC) simulations of the Sun's shadow based on the potential field model of the solar magnetic field averaged over each solar rotation period. We find, however, that the magnitude of the observed intensity deficit in the Sun's shadow is significantly less than that predicted by the MC simulations, particularly during the period around solar maximum when a significant number of ECMEs is recorded. The χ² tests of the agreement between the observations and the MC simulations show that the difference is larger during the periods when the ECMEs occur, and the difference is reduced if the periods of ECMEs are excluded from the analysis. This suggests the first experimental evidence of ECMEs affecting the Sun's shadow observed in the 3 TeV cosmic-ray intensity.
Supernova Driving. II. Compressive Ratio in Molecular-cloud Turbulence
NASA Astrophysics Data System (ADS)
Pan, Liubin; Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke
2016-07-01
The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
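For orientation, the compressive ratio can be computed from a periodic velocity field by a Fourier-space Helmholtz decomposition, as in the sketch below. The convention used here (power in the longitudinal component over power in the solenoidal component) is a common one and may differ in detail from the paper's definition.

```python
import numpy as np

# Fourier-space Helmholtz decomposition of a periodic velocity field, giving
# the compressive ratio chi = P_comp / P_sol.
def compressive_ratio(vx, vy, vz):
    k = [np.fft.fftfreq(n) for n in vx.shape]
    kx, ky, kz = np.meshgrid(*k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid 0/0 at the mean mode
    ux, uy, uz = (np.fft.fftn(v) for v in (vx, vy, vz))
    div = kx * ux + ky * uy + kz * uz        # k . u(k)
    p_comp = np.sum(np.abs(div) ** 2 / k2)   # power along k-hat (longitudinal)
    p_tot = sum(np.sum(np.abs(u) ** 2) for u in (ux, uy, uz))
    return p_comp / (p_tot - p_comp)

rng = np.random.default_rng(1)
v = rng.standard_normal((3, 32, 32, 32))
print(f"chi for a random field: {compressive_ratio(*v):.2f}")  # expect ~0.5
```

For an uncorrelated Gaussian field, one third of the power is longitudinal, giving chi near 0.5, the same equilibrium value quoted above for solenoidally driven isothermal turbulence.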
NASA Astrophysics Data System (ADS)
Wang, Xin; Utsumi, Motoo; Yang, Yingnan; Li, Dawei; Zhao, Yingxin; Zhang, Zhenya; Feng, Chuanping; Sugiura, Norio; Cheng, Jay Jiayang
2015-01-01
A novel photocatalyst AgBr/Ag3PO4/TiO2 was developed by a facile in situ deposition method and used for degradation of microcystin-LR. TiO2 (P25), as a cost-effective chemical, was used to improve the stability of AgBr/Ag3PO4 under simulated solar light irradiation. The photocatalytic activity tests for this heterojunction were conducted under simulated solar light irradiation using methyl orange as the targeted pollutant. The results indicated that the optimal Ag to Ti molar ratio for the photocatalytic activity of the resulting heterojunction AgBr/Ag3PO4/TiO2 was 1.5 (named 1.5 BrPTi), which possessed higher photocatalytic capacity than AgBr/Ag3PO4. The 1.5 BrPTi heterojunction was also more stable than AgBr/Ag3PO4 in photocatalysis. This highly efficient and relatively stable photocatalyst was further tested for degradation of the hepatotoxin microcystin-LR (MC-LR). The results suggested that MC-LR was much more easily degraded by 1.5 BrPTi than by AgBr/Ag3PO4. The quenching effects of different scavengers proved that reactive h+ and •OH played important roles in MC-LR degradation.
Acceleration of Monte Carlo SPECT simulation using convolution-based forced detection
NASA Astrophysics Data System (ADS)
de Jong, H. W. A. M.; Slijpen, E. T. P.; Beekman, F. J.
2001-02-01
Monte Carlo (MC) simulation is an established tool to calculate photon transport through tissue in Emission Computed Tomography (ECT). Since the first appearance of MC, a large variety of variance reduction techniques (VRTs) have been introduced to speed up these notoriously slow simulations. One example of a very effective and established VRT is known as forced detection (FD). In standard FD, the path from the photon's scatter position to the camera is chosen stochastically from the appropriate probability density function (PDF), modeling the distance-dependent detector response. In order to speed up MC, the authors propose a convolution-based FD (CFD) which involves replacing the sampling of the PDF by a convolution with a kernel which depends on the position of the scatter event. The authors validated CFD for parallel-hole Single Photon Emission Computed Tomography (SPECT) using a digital thorax phantom. Comparison of projections estimated with CFD and standard FD shows that both estimates converge to practically identical projections (maximum bias 0.9% of peak projection value), despite the slightly different photon paths used in CFD and standard FD. Projections generated with CFD converge, however, to a noise-free projection up to one or two orders of magnitude faster, which is extremely useful in many applications such as model-based image reconstruction.
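The contrast between standard FD and CFD can be sketched in one lateral dimension with an assumed Gaussian, distance-dependent PSF (all parameters invented): standard FD draws one stochastic sample of the PSF per scatter event, while CFD deposits each event and convolves with the corresponding kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# 1D sketch: scatter events at lateral positions `pos` with camera distance
# `depth`; the detector response is a Gaussian PSF whose width grows with
# distance. All parameters are illustrative.
rng = np.random.default_rng(2)
n_bins, n_events = 64, 5_000
pos = rng.integers(16, 48, n_events)          # lateral scatter positions (bins)
depth = rng.uniform(5.0, 20.0, n_events)      # distance to the camera (cm)
sigma = 0.5 + 0.1 * depth                     # distance-dependent PSF width

# Standard FD: one stochastic PSF sample per event -> noisy projection.
samples = rng.normal(pos, sigma).round().astype(int).clip(0, n_bins - 1)
proj_fd = np.bincount(samples, minlength=n_bins).astype(float)

# CFD: deposit each event, then convolve with its kernel -> converges faster.
proj_cfd = np.zeros(n_bins)
for s in np.unique(np.round(sigma, 1)):       # group events by kernel width
    hit = np.round(sigma, 1) == s
    img = np.bincount(pos[hit], minlength=n_bins).astype(float)
    proj_cfd += gaussian_filter(img, s, mode="constant")

print(proj_fd.sum(), proj_cfd.sum())          # both preserve total counts
```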
1988-04-13
Simulation: An Artificial Intelligence Approach to System Modeling and Automating the Simulation Life Cycle. Mark S. Fox, Nizwer Husain, Malcolm McRoberts, and Y. V. Reddy. CMU-RI-TR-88-5, Intelligent Systems Laboratory, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania. ... years of research in the application of Artificial Intelligence to Simulation. Our focus has been in two areas: the use of AI knowledge representation
Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki
2018-05-01
We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of the energy and momentum resolutions based on a simplified description of the instrument. We implemented new functionalities in Utsusemi, the standard data analysis tool for 4SEASONS, to enable visualization of the simulated resolution function and prediction of its shape for specific experimental configurations.
STS-31 crewmembers review checklist with instructor on JSC's FB-SMS middeck
NASA Technical Reports Server (NTRS)
1988-01-01
STS-31 Discovery, Orbiter Vehicle (OV) 103, Mission Specialist (MS) Bruce McCandless II (left) and Pilot Charles F. Bolden (right) discuss procedures with a training instructor on the middeck of JSC's fixed-based (FB) Shuttle Mission Simulator (SMS). The three are pointing to a checklist during this training simulation in the Mission Simulation and Training Facility Bldg 5.
A Methodology to Assess UrbanSim Scenarios
2012-09-01
... Education; LOE – Line of Effort; MMOG – Massively Multiplayer Online Game; MC3 – Maneuver Captain's Career Course; MSCCC – Maneuver Support ... augmented reality simulations, increased automation and artificial intelligence simulation, and massively multiplayer online games (MMOG), among ... Turn-based strategy games and simulations are vital tools for military
JSC-1: Lunar Simulant of Choice for Geotechnical Applications and Oxygen Production
NASA Technical Reports Server (NTRS)
Taylor, Lawrence A.; Hill, Eddy; Liu, Yang; Day, James M. D.
2005-01-01
Lunar simulant JSC-1 was produced as the result of a workshop held in 1991 to evaluate the status of simulated lunar material and to make recommendations on future requirements and production of such material (McKay et al., 1991). JSC-1 was prepared from a welded tuff that was mined, crushed, and sized from the Pleistocene San Francisco volcanic field, northern Arizona. As the initial production of approximately 12,300 kg is nearly depleted, new production has commenced. The mineralogy and chemical properties of JSC-1 are described in McKay et al. (1994) and Hill et al. (this volume); a description of its geotechnical properties appears in Klosky et al. (1996). Although other lunar-soil simulants have been produced (e.g., MLS-1: Weiblen et al., 1990; Desai et al., 1992; Chua et al., 1994), they have not been as well standardized as JSC-1; this makes it difficult to standardize results from tests performed on these simulants. Here, we provide an overview of the composition, mineralogy, strength and deformation properties, and potential uses of JSC-1 and outline why it is presently the 'lunar simulant of choice' for geotechnical applications and as a proxy for lunar-oxygen production.
NASA Astrophysics Data System (ADS)
Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.
2010-08-01
This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.
In vitro Dosimetric Study of Biliary Stent Loaded with Radioactive 125I Seeds
Yao, Li-Hong; Wang, Jun-Jie; Shang, Charles; Jiang, Ping; Lin, Lei; Sun, Hai-Tao; Liu, Lu; Liu, Hao; He, Di; Yang, Rui-Jie
2017-01-01
Background: A novel radioactive 125I seed-loaded biliary stent has been used for patients with malignant biliary obstruction. However, the dosimetric characteristics of the stents remain unclear. Therefore, we aimed to describe the dosimetry of stents of different lengths, with different numbers and activities of 125I seeds. Methods: The radiation dosimetry of three representative radioactive stent models was evaluated using a treatment planning system (TPS), thermoluminescent dosimeter (TLD) measurements, and Monte Carlo (MC) simulations. In the process of TPS calculation and TLD measurement, two different water-equivalent phantoms were designed to obtain the cumulative radial dose distribution. Calibration procedures using TLDs in the designed phantom were also conducted. MC simulations were performed using the Monte Carlo N-Particle eXtended version 2.5 general purpose code to calculate the radioactive stent's three-dimensional dose rate distribution in liquid water. Analysis of covariance was used to examine the factors influencing the radial dose distribution of the radioactive stent. Results: The maximum reduction in cumulative radial dose was 26% when the seed activity changed from 0.5 mCi to 0.4 mCi for the same length of radioactive stent. The TLD dose response over 0–10 mGy of 137Cs γ-ray irradiation was linear: y = 182225x − 6651.9 (R² = 0.99152; y is the irradiation dose in mGy, x is the TLD reading in nC). When TLDs were irradiated to a dose of 1 mGy by radiation sources of different energies, the TLD readings differed. Doses at a distance of 0.1 cm from the three stents' surfaces simulated by MC were 79, 93, and 97 Gy. Conclusions: TPS calculation, TLD measurement, and MC simulation were performed and were found to be in good agreement. Although the whole experiment was conducted in a water-equivalent phantom, the data in our evaluation may provide a theoretical basis for dosimetry in clinical application. PMID:28469106
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Department of Engineering Physics, Tsinghua University, Beijing; Tian, Z
Purpose: Acuros BV has become available to perform accurate dose calculations in high-dose-rate (HDR) brachytherapy with phantom heterogeneity considered by solving the Boltzmann transport equation. In this work, we performed validation studies regarding the dose calculation accuracy of Acuros BV in cases with a shielded cylinder applicator using Monte Carlo (MC) simulations. Methods: Fifteen cases were considered in our studies, covering five different diameters of the applicator and three different shielding degrees. For each case, a digital phantom was created in Varian BrachyVision with the cylinder applicator inserted in the middle of a large water phantom. A treatment plan with eight dwell positions was generated for these fifteen cases. Dose calculations were performed with Acuros BV. We then generated a voxelized phantom of the same geometry, and the materials were modeled according to the vendor's specifications. MC dose calculations were then performed using our in-house developed fast MC dose engine for HDR brachytherapy (gBMC) on a GPU platform, which is able to simulate both photon transport and electron transport in a voxelized geometry. A phase-space file for the Ir-192 HDR source was used as a source model for MC simulations. Results: Satisfactory agreement between the dose distributions calculated by Acuros BV and those calculated by gBMC was observed in all cases. Quantitatively, we computed the point-wise dose difference within the region that receives a dose higher than 10% of the reference dose, defined to be the dose at 5 mm outward from the applicator surface. The mean dose difference was ∼0.45%–0.51% and the 95th-percentile maximum difference was ∼1.24%–1.47%. Conclusion: Acuros BV is able to accurately perform dose calculations in HDR brachytherapy with a shielded cylinder applicator.
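The reported metrics reduce to a few lines of array code given two dose grids; the grids below are synthetic placeholders standing in for the Acuros BV and gBMC results.

```python
import numpy as np

# Point-wise relative dose difference between two grids, restricted to voxels
# above 10% of a reference dose. All inputs here are synthetic stand-ins.
rng = np.random.default_rng(3)
dose_mc = rng.uniform(0.0, 2.0, (50, 50, 50))
dose_acuros = dose_mc * (1.0 + 0.005 * rng.standard_normal(dose_mc.shape))
d_ref = 1.0                          # e.g. dose 5 mm from the applicator surface

mask = dose_mc > 0.1 * d_ref
rel_diff = np.abs(dose_acuros[mask] - dose_mc[mask]) / d_ref * 100.0
print(f"mean diff: {rel_diff.mean():.2f}%  "
      f"95th percentile: {np.percentile(rel_diff, 95):.2f}%")
```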
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Shen, C; Tian, Z
Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy and the energy and spatial spreads are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of doses for those ideal pencil beams with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies of 83, 147, and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20%, and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
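A toy version of this commissioning loop is sketched below, with an invented analytic stand-in for the MC-computed ideal-beam dose library and a Nelder-Mead fit in place of the authors' conjugate-gradient/parameter-fitting algorithm; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy commissioning: the real beam's depth dose is a Gaussian-weighted
# superposition of ideal (monoenergetic) beam doses; fit the weights' mean
# energy and spread to a "measurement". ideal_dose() is a crude stand-in
# for an MC-computed library.
energies = np.linspace(140, 155, 31)          # MeV, library grid
z = np.linspace(0, 20, 200)                   # depth (cm)

def ideal_dose(E):                            # toy Bragg-like depth dose
    r = 0.0022 * E**1.77                      # rough range-energy relation (cm)
    return np.exp(-((z - r) ** 2) / 0.5) + 0.3 * (z < r)

library = np.array([ideal_dose(E) for E in energies])

def model(E_mean, E_sigma):
    w = np.exp(-((energies - E_mean) ** 2) / (2.0 * E_sigma**2))
    return (w / w.sum()) @ library

rng = np.random.default_rng(4)                # "measurement": truth (147, 1.5)
meas = model(147.0, 1.5) + 0.01 * rng.standard_normal(z.size)

res = minimize(lambda p: np.sum((model(*p) - meas) ** 2),
               x0=[150.0, 2.0], method="Nelder-Mead")
print("fitted mean energy and spread:", res.x)
```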
Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An
2017-11-08
A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. The doubly clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. A fourth-order GPC expansion with orthogonal terms also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method, and the fourth-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%. The corresponding yield exceeds 90% within twofold standard deviations of the mean.
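As a sketch of how a fourth-order GPC surrogate compares with plain MC, consider a one-variable toy response of a standard normal process deviation; the response function and all numbers are illustrative stand-ins, not the MEMS residual-stress model.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Fourth-order gPC surrogate vs. plain MC for a toy response f of one
# standard normal variable X.
f = lambda x: np.exp(0.3 * x) + 0.1 * x**2

# Project f onto probabilists' Hermite polynomials He_k (orthogonal under
# N(0,1)): c_k = E[f(X) He_k(X)] / k!, computed by Gauss-Hermite quadrature.
x_q, w_q = He.hermegauss(30)
w_q = w_q / np.sqrt(2.0 * np.pi)              # normalize to the N(0,1) weight
order = 4
coef = [np.sum(w_q * f(x_q) * He.hermeval(x_q, np.eye(order + 1)[k]))
        / factorial(k) for k in range(order + 1)]

# Compare the surrogate's statistics against direct MC sampling.
rng = np.random.default_rng(5)
x_mc = rng.standard_normal(200_000)
surrogate = He.hermeval(x_mc, coef)
print("mean (gPC, MC):", coef[0], f(x_mc).mean())
print("std  (gPC, MC):", surrogate.std(), f(x_mc).std())
```

Once the coefficients are in hand, the surrogate replaces the expensive model in yield studies, which is the labor saving the abstract describes.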
Gao, Lili
2017-01-01
A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. The doubly clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. A fourth-order GPC expansion with orthogonal terms also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method, and the fourth-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%. The corresponding yield exceeds 90% within twofold standard deviations of the mean. PMID:29117096
NASA Astrophysics Data System (ADS)
Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud
2018-04-01
Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and from poor robustness against emission-transmission inconsistency. A novel proposed random correction method succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true coincidence radial profile. The removal of the spurious extrahepatic counts by TOF reconstruction was theoretically explained by a better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of the novel random correction method.
Li, Chang-Hua; Chiang, Chih-Pin; Yang, Jun-Yi; Ma, Chia-Jou; Chen, Yu-Chan; Yen, Hungchen Emilie
2014-07-01
RING-type copines are a small family of plant-specific RING-type ubiquitin ligases. They contain an N-terminal myristoylation site for membrane anchoring, a central copine domain for substrate recognition, and a C-terminal RING domain for E2 docking. RING-type copine McCPN1 (copine1) from halophyte ice plant (Mesembryanthemum crystallinum L.) was previously identified from a salt-induced cDNA library. In this work, we characterize the activity, expression, and localization of McCPN1 in ice plant. An in vitro ubiquitination assay of McCPN1 was performed using two ice plant UBCs, McUBC1 and McUBC2, characterized from the same salt-induced cDNA library. The results showed that McUBC2, a member of the UBC8 family, stimulated the autoubiquitination activity of McCPN1, while McUBC1, a homolog of the UBC35 family, did not. The results indicate that McCPN1 has selective E2-dependent E3 ligase activity. We found that McCPN1 localizes primarily on the plasma membrane and in the nucleus of plant cells. Under salt stress, the accumulation of McCPN1 in the roots increases. A yeast two-hybrid screen was used to search for potential McCPN1-interacting partners using a library constructed from salt-stressed ice plants. Screening with full-length McCPN1 identified several independent clones containing partial Argonaute 4 (AGO4) sequence. Subsequent agro-infiltration, protoplast two-hybrid analysis, and bimolecular fluorescence complementation assay confirmed that McCPN1 and AGO4 interacted in vivo in the nucleus of plant cells. The possible involvement of a catalyzed degradation of AGO4 by McCPN1 in response to salt stress is discussed. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Although, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate the longest delay with the least gain distortion among the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. In this manner the predictor can accurately provide the desired amount of prediction, while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for a longer time delay. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
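For orientation, a classical lead/lag compensator, the simplest of the filters compared above, fits in a few lines; the update rate, delay, and time constants below are assumptions for illustration, and the residual error reflects the gain distortion the report describes.

```python
import numpy as np
from scipy import signal

# Discrete lead/lag compensator H(s) = (1 + a*s) / (1 + b*s) with a > b,
# applied to a delayed operator input. All parameter values are illustrative.
fs, delay_s = 60.0, 0.096                    # Hz update rate, 96 ms delay
a, b = 0.10, 0.02                            # lead / lag time constants (s)

bz, az, _ = signal.cont2discrete(([a, 1.0], [b, 1.0]), 1.0 / fs,
                                 method="bilinear")
t = np.arange(0, 2.0, 1.0 / fs)
cmd = np.sin(2 * np.pi * t)                  # 1 Hz operator input
delayed = np.sin(2 * np.pi * (t - delay_s))  # after the transport delay
compensated = signal.lfilter(bz[0], az, delayed)

n0 = int(0.5 * fs)                           # skip the filter transient
rms = np.sqrt(np.mean((compensated[n0:] - cmd[n0:]) ** 2))
_, h = signal.freqz(bz[0], az, worN=[2 * np.pi * 1.0 / fs])
print(f"phase lead at 1 Hz: {np.degrees(np.angle(h[0])):.1f} deg "
      f"(delay is {360 * delay_s:.1f} deg); rms residual: {rms:.3f}")
# |H| > 1 at 1 Hz: the phase lead comes with gain distortion, as noted above.
```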
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzidakis, Stylianos; Greulich, Christopher
A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
Mason, J; Al-Qaisieh, B; Bownes, P; Henry, A; Thwaites, D
2013-03-01
In permanent seed implant prostate brachytherapy the actual dose delivered to the patient may be less than that calculated by TG-43U1 due to interseed attenuation (ISA) and differences between prostate tissue composition and water. In this study the magnitude of the ISA effect is assessed in a phantom and in clinical prostate postimplant cases. Results are compared for seed models 6711 and 9011 with 0.8 and 0.5 mm diameters, respectively. A polymethyl methacrylate (PMMA) phantom was designed to perform ISA measurements in a simple eight-seed arrangement and at the center of an implant of 36 seeds. Monte Carlo (MC) simulation and experimental measurements using a MOSFET dosimeter were used to measure dose rate and the ISA effect. MC simulations of 15 CT-based postimplant prostate treatment plans were performed to compare the clinical impact of ISA on dose to prostate, urethra, rectum, and the volume enclosed by the 100% isodose, for 6711 and 9011 seed models. In the phantom, ISA reduced the dose rate at the MOSFET position by 8.6%-18.3% (6711) and 7.8%-16.7% (9011) depending on the measurement configuration. MOSFET measured dose rates agreed with MC simulation predictions within the MOSFET measurement uncertainty, which ranged from 5.5% to 7.2% depending on the measurement configuration (k = 1, for the mean of four measurements). For 15 clinical implants, the mean ISA effect for 6711 was to reduce prostate D90 by 4.2 Gy (3%), prostate V100 by 0.5 cc (1.4%), urethra D10 by 11.3 Gy (4.4%), rectal D2cc by 5.5 Gy (4.6%), and the 100% isodose volume by 2.3 cc. For the 9011 seed the mean ISA effect reduced prostate D90 by 2.2 Gy (1.6%), prostate V100 by 0.3 cc (0.7%), urethra D10 by 8.0 Gy (3.2%), rectal D2cc by 3.1 Gy (2.7%), and the 100% isodose volume by 1.2 cc. Differences between the MC simulation and TG-43U1 consensus data for the 6711 seed model had a similar impact, reducing mean prostate D90 by 6 Gy (4.2%) and V100 by 0.6 cc (1.8%). ISA causes the delivered dose in prostate seed implant brachytherapy to be lower than the dose calculated by TG-43U1. MC simulation of phantom seed arrangements show that dose at a point can be reduced by up to 18% and this has been validated using a MOSFET dosimeter. Clinical simulations show that ISA reduces DVH parameter values, but the reduction is less for thinner seeds.
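The DVH metrics quoted above (D90, V100) are simple functionals of the voxel dose distribution; a sketch with synthetic voxel doses (the prescription level and voxel volume are illustrative):

```python
import numpy as np

# D90: the minimum dose covering 90% of the prostate volume (the 10th
# percentile of voxel doses). V100: volume receiving at least the
# prescription dose. Inputs here are synthetic.
rng = np.random.default_rng(6)
dose = rng.normal(160.0, 20.0, 20_000)       # Gy, voxel doses in the prostate
voxel_cc = 0.001                             # cc per voxel (illustrative)
prescription = 145.0                         # Gy (assumed prescription level)

d90 = np.percentile(dose, 10.0)              # 90% of voxels get at least this
v100 = np.sum(dose >= prescription) * voxel_cc
print(f"D90 = {d90:.1f} Gy, V100 = {v100:.2f} cc")
```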
Bouhrara, Mustapha; Spencer, Richard G.
2015-01-01
Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination with conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. Through extensive Monte Carlo numerical simulations and analysis of in vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
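The core Bayesian step, marginalizing nuisance parameters before normalizing the posterior of the parameter of interest, can be illustrated on a grid with a deliberately simplified bi-exponential stand-in for the mcDESPOT two-pool signal; the model, grids, and noise level are all invented.

```python
import numpy as np

# Grid-based posterior for a parameter of interest ("mwf") with a nuisance
# parameter marginalized out under a flat prior and Gaussian noise.
rng = np.random.default_rng(7)
t = np.linspace(0.01, 0.2, 32)                       # s, sampling times

def two_pool(mwf, t2_slow):                          # toy bi-exponential model
    return mwf * np.exp(-t / 0.02) + (1.0 - mwf) * np.exp(-t / t2_slow)

sigma = 0.02
data = two_pool(0.2, 0.08) + sigma * rng.standard_normal(t.size)

mwf_grid = np.linspace(0.01, 0.6, 120)               # parameter of interest
t2_grid = np.linspace(0.05, 0.12, 80)                # nuisance parameter
m, n = np.meshgrid(mwf_grid, t2_grid, indexing="ij")
resid = two_pool(m[..., None], n[..., None]) - data  # broadcast over time axis
loglike = -0.5 * np.sum(resid**2, axis=-1) / sigma**2

# Marginalize the nuisance axis, then normalize over the MWF grid.
post = np.exp(loglike - loglike.max()).sum(axis=1)
post /= post.sum()
print("posterior mean MWF:", np.sum(mwf_grid * post))
```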
Seasonality of the Mindanao Current/Undercurrent System
NASA Astrophysics Data System (ADS)
Ren, Qiuping; Li, Yuanlong; Wang, Fan; Song, Lina; Liu, Chuanyu; Zhai, Fangguo
2018-02-01
Seasonality of the Mindanao Current (MC)/Undercurrent (MUC) system is investigated using moored acoustic Doppler current profiler (ADCP) measurements off Mindanao (8°N, 127.05°E) and ocean model simulations. The mooring observations from December 2010 to August 2014 revealed that the surface-layer MC between 50 and 150 m is dominated by annual-period variation and tends to be stronger in (boreal) spring and weaker in fall. Prominent semiannual variations were detected below 150 m. The lower MC between 150 and 400 m is stronger in spring and fall and weaker in summer and winter, while the northward MUC below 400 m emerges in summer and winter and disappears in spring and fall. In-phase and out-of-phase current anomalies above and below 150 m were observed alternately. These variations are faithfully reproduced by an eddy-resolving ocean model simulation (OFES). Further analysis demonstrates that the seasonal variation of the MC is a component of the large-scale upper-ocean circulation gyre, while current variations in the MUC layer are confined near the western boundary and characterized by shorter-scale (200-400 km) structures. Most of the MC variations and approximately half of the MUC variations can be explained by the first and second baroclinic modes and are caused by local wind forcing of the western Pacific. Semiannual surface wind variability and the superimposition of the two baroclinic modes jointly give rise to the enhanced subsurface semiannual variations. The pronounced mesoscale eddy variability in the MUC layer may also contribute to the seasonality of the MUC through eddy-current interaction.
2011-01-01
Background Out-of-hospital endotracheal intubation performed by paramedics using the Macintosh blade for direct laryngoscopy is associated with a high incidence of complications. The novel technique of video laryngoscopy has been shown to improve glottic view and intubation success in the operating room. The aim of this study was to compare the glottic view, time of intubation and success rate of the McGrath® Series 5 and GlideScope® Ranger video laryngoscopes with the Macintosh laryngoscope when used by paramedics. Methods Thirty paramedics performed six intubations in a randomised order with all three laryngoscopes in an airway simulator with a normal airway. Subsequently, every participant performed one intubation attempt with each device in the same manikin with simulated cervical spine rigidity using a cervical collar. Glottic view, time until visualisation of the glottis and time until first ventilation were evaluated. Results Time until first ventilation was equivalent after three intubations in the first scenario. In the scenario with decreased cervical motion, the time until first ventilation was longer using the McGrath® compared to the GlideScope® and Macintosh (p < 0.01). The success rate for endotracheal intubation was similar for all three devices. Glottic view was only improved with the McGrath® device (p < 0.001) compared to the Macintosh blade. Conclusions The learning curve for video laryngoscopy in paramedics was steep in this study. However, these data do not support prehospital use of the McGrath® and GlideScope® devices by paramedics. PMID:21241469
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecover, J. D.; Chittenden, J. P.
A critical limitation of magnetically imploded systems such as magnetized liner inertial fusion (MagLIF) [Slutz et al., Phys. Plasmas 17, 056303 (2010)] is the magneto-Rayleigh-Taylor (MRT) instability, which primarily disrupts the outer surface of the liner. MagLIF-relevant experiments have shown large-amplitude multi-mode MRT instability growth from surface roughness [McBride et al., Phys. Rev. Lett. 109, 135004 (2012)], which is only reproduced by 3D simulations using our MHD code Gorgon when an artificially azimuthally correlated initialisation is added. We have shown that the missing azimuthal correlation could be provided by a combination of the electro-thermal instability (ETI) and an "electro-choric" instability (ECI), describing, respectively, the tendency of current to correlate azimuthally early in time due to temperature-dependent Ohmic heating, and an amplification of the ETI driven by density-dependent resistivity around vapourisation. We developed and implemented a material strength model in Gorgon to improve simulation of the solid phase of liner implosions which, when applied to simulations exhibiting the ETI and ECI, gave a significant increase in wavelength and amplitude. Full-circumference simulations of the MRT instability provided a significant improvement on previous randomly initialised results and approached agreement with experiment.
Molecular simulation study of cavity-generated instabilities in the superheated Lennard-Jones liquid
NASA Astrophysics Data System (ADS)
Torabi, Korosh; Corti, David S.
2010-10-01
Previous equilibrium-based density-functional theory (DFT) analyses of cavity formation in the pure component superheated Lennard-Jones (LJ) liquid [S. Punnathanam and D. S. Corti, J. Chem. Phys. 119, 10224 (2003); M. J. Uline and D. S. Corti, Phys. Rev. Lett. 99, 076102 (2007)] revealed that a thermodynamic limit of stability appears in which no liquidlike density profile can develop for cavity radii greater than some critical size (being a function of temperature and bulk density). The existence of these stability limits was also verified using isothermal-isobaric Monte Carlo (MC) simulations. To test the possible relevance of these limits of stability to a dynamically evolving system, one that may be important for homogeneous bubble nucleation, we perform isothermal-isobaric molecular dynamics (MD) simulations in which cavities of different sizes are placed within the superheated LJ liquid. When the impermeable boundary utilized to generate a cavity is removed, the MD simulations show that the cavity collapses and the overall density of the system remains liquidlike, i.e., the system is stable, when the initial cavity radius is below some certain value. On the other hand, when the initial radius is large enough, the cavity expands and the overall density of the system rapidly decreases toward vaporlike densities, i.e., the system is unstable. Unlike the DFT predictions, however, the transition between stability and instability is not infinitely sharp. The fraction of initial configurations that generate an instability (or a phase separation) increases from zero to unity as the initial cavity radius increases over a relatively narrow range of values, which spans the predicted stability limit obtained from equilibrium MC simulations. The simulation results presented here provide initial evidence that the equilibrium-based stability limits predicted in the previous DFT and MC simulation studies may play some role, yet to be fully determined, in the homogeneous nucleation and growth of embryos within metastable fluids.
Characterization of Compton-scatter imaging with an analytical simulation method
Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H
2018-01-01
By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min−1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663
Characterization of Compton-scatter imaging with an analytical simulation method
NASA Astrophysics Data System (ADS)
Jones, Kevin C.; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V.; Chu, James C. H.
2018-01-01
By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140-220 keV, and 40-50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min-1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images.
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y; Tian, Z; Jiang, S
Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aiming at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationships between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package in an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged particle transport in this geometry are currently in progress.
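A minimal CPU-side sketch of the geometry machinery described above: quadric surfaces as sign tests, bodies as intersections of signed surfaces, and a tree searched depth-first to find the body containing a point. The scene and naming are invented, not the abstract's implementation.

```python
import numpy as np

# A quadric surface is Q(p) = p^T A p + b.p + c; a body is an intersection of
# signed surfaces; nested bodies form a tree searched from the root.
def quadric(A, b, c):
    return lambda p: p @ A @ p + b @ p + c

sphere = quadric(np.eye(3), np.zeros(3), -4.0)             # |p|^2 < 4 inside
slab_z = quadric(np.diag([0, 0, 1.0]), np.zeros(3), -1.0)  # |z| < 1 inside
core = quadric(np.eye(3), np.zeros(3), -1.0)               # |p|^2 < 1 inside

tree = {"name": "world", "surfaces": [], "children": [
    {"name": "capped-sphere", "surfaces": [(sphere, -1), (slab_z, -1)],
     "children": [
         {"name": "core", "surfaces": [(core, -1)], "children": []}]}]}

def locate(node, p):
    """Return the deepest body containing point p (depth-first search)."""
    if any(np.sign(q(p)) != s for q, s in node["surfaces"]):
        return None
    for child in node["children"]:
        hit = locate(child, p)
        if hit is not None:
            return hit
    return node["name"]

print(locate(tree, np.array([0.0, 0.0, 0.5])))   # -> "core"
print(locate(tree, np.array([1.5, 0.0, 0.5])))   # -> "capped-sphere"
print(locate(tree, np.array([5.0, 0.0, 0.0])))   # -> "world"
```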
1969-02-24
S69-19858 (December 1968) --- Two members of the Apollo 9 prime crew participate in simulation training in the Apollo Lunar Module Mission Simulator (LMMS) at the Kennedy Space Center (KSC). On the left is astronaut James A. McDivitt, commander; and on the right is astronaut Russell L. Schweickart, lunar module pilot.
USDA-ARS?s Scientific Manuscript database
Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Crew Training - Apollo 9 - KSC
1969-02-17
S69-19983 (17 Feb. 1969) --- The Apollo 9 crew is shown suited up for a simulated flight in the Apollo Mission Simulator at the Kennedy Space Center (KSC). Left to right are astronauts James A. McDivitt, commander; David R. Scott, command module pilot; and Russell L. Schweickart, lunar module pilot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hristov, D; Schlosser, J; Bazalova, M
2014-06-01
Purpose: To quantify the effect of ultrasound (US) probe beam attenuation for radiation therapy delivered under real-time US image guidance by means of Monte Carlo (MC) simulations. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their CT images in the EGSnrc BEAMnrc and DOSXYZnrc codes. Due to the metal parts, the probes were scanned in a Tomotherapy machine with a 3.5 MV beam. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm³. Beam attenuation due to the probes was measured in a solid water phantom for 6 MV and 15 MV 15×15 cm² beams delivered on a Varian Trilogy linear accelerator. The dose was measured with the PTW-729 ionization chamber array at two depths and compared to MC simulations. The extreme-case beam attenuation expected in robotic US image guided radiotherapy for probes in the upright position was quantified by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R² > 0.99. The maximum mass densities were 4.6 and 4.2 g/cm³ in the C5-2 and X6-1 probe, respectively. Gamma analysis of the simulated and measured doses revealed that over 98% of measurement points passed the 3%/3mm criteria for both probes and measurement depths. The extreme attenuation for probes in the upright position was found to be 25% and 31% for the C5-2 and X6-1 probe, respectively, for both 6 and 15 MV beams at 10 cm depth. Conclusion: MC models of two US probes used for real-time image guidance during radiotherapy have been built. As a result, radiotherapy treatment planning with the imaging probes in place can now be performed. J Schlosser is an employee of SoniTrack Systems, Inc. D Hristov has financial interest in SoniTrack Systems, Inc.
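The gamma analysis used here is a standard dose-comparison tool; below is a brute-force 2D sketch with global normalization and synthetic data (not a clinical implementation).

```python
import numpy as np

# Brute-force 2D global gamma analysis (3%/3 mm): a point passes if some
# nearby calculation point is close in both dose and distance.
def gamma_pass_rate(meas, calc, spacing_mm, dd=0.03, dta_mm=3.0):
    ny, nx = meas.shape
    y, x = np.mgrid[0:ny, 0:nx] * spacing_mm
    d_ref = meas.max()                       # global normalization
    passed = 0
    for i in range(ny):
        for j in range(nx):
            dist2 = ((y - y[i, j]) ** 2 + (x - x[i, j]) ** 2) / dta_mm**2
            dose2 = ((calc - meas[i, j]) / (dd * d_ref)) ** 2
            if np.min(dist2 + dose2) <= 1.0:
                passed += 1
    return passed / meas.size

rng = np.random.default_rng(8)
meas = rng.uniform(0.5, 1.0, (40, 40))       # synthetic measured plane
calc = meas * (1 + 0.01 * rng.standard_normal(meas.shape))
print(f"gamma pass rate: {100 * gamma_pass_rate(meas, calc, 2.0):.1f}%")
```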
Global Combat Support System-Marine Corps Logistics Chain Management Increment 1 (GCSS-MC LCM Inc 1)
2016-03-01
... Production Document; DAE – Defense Acquisition Executive; DoD – Department of Defense; DoDAF – DoD Architecture Framework; FD – Full Deployment; FDD – ...
Schedule (planned / actual): ... Jul 2004; Milestone B: Jun 2007 / Jun 2007; Milestone C: May 2010 / May 2010; FDD: Sep 2014 / Mar 2015; FD: Dec 2015 / Dec 2015. GCSS-MC LCM Inc 1, March 2016.
A Comparison of the McCarthy Scales of Children's Abilities and the WISC-R.
ERIC Educational Resources Information Center
Goh, David S.; Youngquist, James
1979-01-01
The study involving 40 learning disabled children (6-8 years old) investigated the relationships between the various indexes of the McCarthy Scales of Children's Abilities (MSCA) and the scales of the Wechsler Intelligence Scale for Children-Revised (WISC-R), and the comparability between the MSCA General Cognitive Index and the WISC-R Full Scale…
TiOx deposited by magnetron sputtering: a joint modelling and experimental study
NASA Astrophysics Data System (ADS)
Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.
2018-05-01
This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx (x ≤ 2) at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to obtain the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma-phase hysteresis behaviour, film composition and film morphology are predicted. The methodology can be applied to any coater and any film. This paves the way to the elaboration of a virtual coater allowing a user to predict the composition and morphology of films deposited in silico.
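At the heart of the kMC film-growth stage is rate-proportional event selection with exponential waiting times; a generic sketch follows (the event list and rates are placeholders, not the paper's surface chemistry).

```python
import numpy as np

# One kinetic Monte Carlo step: pick the next event with probability
# proportional to its rate and advance the clock by an exponential waiting
# time drawn at the total rate. Rates here are illustrative.
rng = np.random.default_rng(9)
rates = {"adsorption": 5.0, "diffusion_hop": 50.0, "desorption": 0.1}

def kmc_step(rates):
    names = list(rates)
    r = np.array([rates[n] for n in names])
    total = r.sum()
    event = rng.choice(names, p=r / total)
    dt = rng.exponential(1.0 / total)       # residence time before the event
    return event, dt

t, counts = 0.0, {n: 0 for n in rates}
for _ in range(10_000):
    event, dt = kmc_step(rates)
    t += dt
    counts[event] += 1
print(f"simulated time: {t:.3f} s, events: {counts}")
```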
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. The performance comparison was established on the speedup for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of up to 20% in speedup was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Room scatter effects in Total Skin Electron Irradiation: Monte Carlo simulation study.
Nevelsky, Alexander; Borzov, Egor; Daniel, Shahar; Bar-Deroma, Raquel
2017-01-01
Total Skin Electron Irradiation (TSEI) is a complex technique which usually involves the use of large electron fields and the dual-field approach. In this situation, many electrons scattered from the treatment room floor are produced. However, no investigations of the effect of scattered electrons in TSEI treatments have been reported. The purpose of this work was to study the contribution of floor-scattered electrons to skin dose during TSEI treatment using Monte Carlo (MC) simulations. All MC simulations were performed with the EGSnrc code. The influence of beam energy, dual-field angle, and floor material on the contribution of floor scatter was investigated. The spectrum of the scattered electrons was calculated. Measurements of the dose profile were performed in order to verify the MC calculations. A dependence of floor scatter on the floor material was observed (at 20 cm from the floor, the scatter contribution was about 21%, 18%, 15%, and 12% for iron, concrete, PVC, and water, respectively). Although total dose profiles exhibited slight variation as functions of beam energy and dual-field angle, no dependence of the floor scatter contribution on the beam energy or dual-field angle was found. The spectrum of the scattered electrons was almost uniform from a few hundred keV to 4 MeV, and then decreased linearly up to 6 MeV. For the TSEI technique, the dose contribution due to electrons scattered from the room floor may be clinically significant and should be taken into account during the design and commissioning phases. MC calculations can be used for this task. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsukamoto, S.; Arakawa, Y.; Bell, G. R.
2007-04-10
Dynamic images of InAs quantum dots (QDs) formation are obtained using a unique scanning tunneling microscope (STM) placed within the growth chamber. These images are interpreted with the aid of kinetic Monte Carlo (kMC) simulations of the QD nucleation process. Alloy fluctuations in the InGaAs wetting layer prior to QD formation assist in the nucleation of stable InAs islands containing tens of atoms which grow extremely rapidly to form QDs. Furthermore, not all deposited In is initially incorporated into the lattice, providing a large supply of material to rapidly form QDs at the critical thickness.
Instability timescale for the inclination instability in the solar system
NASA Astrophysics Data System (ADS)
Zderic, Alexander; Madigan, Ann-Marie; Fleisig, Jacob
2018-04-01
The gravitational influence of small bodies is often neglected in the study of solar system dynamics. However, this is not always an appropriate assumption. For example, mutual secular torques between low-mass particles on eccentric orbits can result in a self-gravity instability (the 'inclination instability'; Madigan & McCourt 2016). During the instability, inclinations increase exponentially, eccentricities decrease (detachment), and orbits cluster in argument of perihelion. In the solar system, the orbits of the most distant objects show all three of these characteristics (high inclination: Volk & Malhotra (2017), detachment: Delsanti & Jewitt (2006), and argument of perihelion clustering: Trujillo & Sheppard (2014)). The inclination instability is a natural explanation for these phenomena. Unfortunately, full N-body simulations of the solar system are unfeasible (N ≈ 10¹²), and the behavior of the instability depends on N, prohibiting the direct application of lower-N simulations. Here we present the instability timescale's functional dependence on N, allowing us to extrapolate our simulation results to that appropriate for the solar system. We show that ~5 Earth masses of small icy bodies in the Sedna region is sufficient for the inclination instability to occur in the outer solar system.
NASA Technical Reports Server (NTRS)
Bentz, Daniel N.; Betush, William; Jackson, Kenneth A.
2003-01-01
In this paper we report on two related topics: kinetic Monte Carlo simulations of the steady state growth of rod eutectics from the melt, and a study of the surface roughness of binary alloys. We have implemented a three-dimensional kinetic Monte Carlo (kMC) simulation with diffusion by pair exchange only in the liquid phase. Entropies of fusion are first chosen to fit the surface roughness of the pure materials, and the bond energies are derived from the equilibrium phase diagram by treating the solid and liquid as regular and ideal solutions, respectively. A simple cubic lattice oriented in the {100} direction is used. Growth of the rods is initiated from columns of pure B material embedded in an A matrix, arranged in a close-packed array with semi-periodic boundary conditions. The simulation cells typically have dimensions of 50 by 87 by 200 unit cells. Steady state growth is compliant with the Jackson-Hunt model. In the kMC simulations, using the spin-one Ising model, the growth of each phase is faceted or nonfaceted depending on its entropy of fusion. There have been many studies of the surface roughening transition in single-component systems, but none for binary alloy systems. The location of the surface roughening transition for the phases of a eutectic alloy determines whether the eutectic morphology will be regular or irregular. We have conducted a study of surface roughness on the spin-one Ising model with diffusion using kMC. The surface roughness was found to scale with the melting temperature of the alloy as given by the liquidus line on the equilibrium phase diagram. The density of missing lateral bonds at the surface was used as a measure of surface roughness.
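The roughness measure described in the last sentence, the density of missing lateral bonds at the surface, can be computed directly from a lattice occupancy array; a sketch with a synthetic interface (geometry and numbers are illustrative):

```python
import numpy as np

# Count broken lateral (in-plane) solid bonds at a solid-liquid interface on
# a simple cubic lattice. occupancy[z, y, x] = 1 for solid sites; the rough
# interface here is synthetic.
rng = np.random.default_rng(10)
nz, ny, nx = 16, 32, 32
height = 8 + (rng.random((ny, nx)) < 0.3)          # interface height map
z = np.arange(nz)[:, None, None]
solid = (z < height).astype(int)

# A lateral bond is "missing" where a solid site has a liquid in-plane
# neighbor (periodic boundaries in y and x); each bond is counted once.
missing = 0
for axis in (1, 2):
    nbr = np.roll(solid, 1, axis=axis)
    missing += np.sum(solid * (1 - nbr)) + np.sum((1 - solid) * nbr)

# Surface sites: solid with liquid directly above.
surface_sites = np.sum(solid * (1 - np.roll(solid, -1, axis=0)))
print("missing lateral bonds per surface site:",
      missing / max(surface_sites, 1))
```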
Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-01-01
Purpose The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the superposition-1 results. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions According to the results, applicator attenuation has no significant effect on the calculated point doses. The superposition method, adding the dose of each source obtained by MC simulation, can therefore estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061
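The superposition step itself is just a sum of single-source contributions over the dwell positions. A minimal sketch, using a bare inverse-square kernel in place of the full TG-43 dose-rate equation; all numbers and names are illustrative, not clinical data:

```python
import numpy as np

def point_dose(point, dwell_positions, dwell_times_s, S_k=40800.0, Lam=1.109):
    """Superposition estimate: sum single-source contributions at `point`.

    A bare inverse-square kernel stands in for the full TG-43 dose-rate
    equation (radial dose and anisotropy functions are omitted).  S_k
    (air-kerma strength, U) and Lam (dose-rate constant, cGy/h/U, close
    to the consensus Ir-192 value) are illustrative placeholders.
    """
    dose = 0.0
    for pos, t in zip(dwell_positions, dwell_times_s):
        r = np.linalg.norm(np.asarray(point) - np.asarray(pos))  # cm
        dose += S_k * Lam * (t / 3600.0) / r**2  # cGy, toy kernel
    return dose

# Example: dose at a point 2 cm off a tandem with three dwell positions.
dwells = [(0.0, 0.0, z) for z in (0.0, 0.5, 1.0)]
times = [10.0, 12.0, 8.0]  # seconds
print(f"{point_dose((2.0, 0.0, 0.5), dwells, times):.1f} cGy")
```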
Validation of an In-Water, Tower-Shading Correction Scheme
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Doyle, John P.; Zibordi, Giuseppe; vanderLinde, Dirk
2003-01-01
Large offshore structures used for the deployment of optical instruments can significantly perturb the intensity of the light field surrounding the optical measurement point, where different portions of the visible spectrum are subject to different shadowing effects. These effects degrade the quality of the acquired optical data and can reduce the accuracy of several derived quantities, such as those obtained by applying bio-optical algorithms directly to the shadow-perturbed data. As a result, optical remote sensing calibration and validation studies can be impaired if shadowing artifacts are not fully accounted for. In this work, the general in-water shadowing problem is examined for a particular case study. Backward Monte Carlo (MC) radiative transfer computations, performed in a vertically stratified, horizontally inhomogeneous, and realistic ocean-atmosphere system, are shown to accurately simulate the shadow-induced relative percent errors affecting the radiance and irradiance data profiles acquired close to an oceanographic tower. Multiparameter optical data processing has provided an adequate representation of experimental uncertainties, allowing consistent comparison with simulations. The more detailed simulations at the subsurface depth appear to be essentially equivalent to those obtained assuming a simplified ocean-atmosphere system, except in highly stratified waters. MC computations performed in the simplified system can therefore be assumed to accurately simulate the optical measurements conducted under more complex sampling conditions (i.e., within waters presenting moderate stratification at most). A previously reported correction scheme, based on the simplified MC simulations and developed for subsurface shadow-removal processing of in-water optical data taken close to the investigated oceanographic tower, is then validated under most experimental conditions. It appears feasible to generalize the present tower-specific approach to solve other optical sensor shadowing problems pertaining to differently shaped deployment platforms, including surrounding structures and instrument casings.
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an unaccelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
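A minimal sketch of the intermittent idea: cache the scatter estimate and refresh it with the (slow) MC projector only every few iterations. This is a single-subset MLEM toy with invented names, not the authors' implementation:

```python
import numpy as np

def mlem_intermittent(y, A, n_iter=10, scatter_every=3, mc_scatter=None):
    """MLEM with intermittent scatter modelling (single-subset sketch).

    `A` is the system matrix, `y` the measured projections, and
    `mc_scatter(x)` a callable standing in for an expensive Monte Carlo
    scatter projector; it is re-run only every `scatter_every`-th
    iteration and its last estimate is reused otherwise.
    """
    x = np.ones(A.shape[1])
    s = np.zeros_like(y)
    sens = A.T @ np.ones_like(y)
    for it in range(n_iter):
        if mc_scatter is not None and it % scatter_every == 0:
            s = mc_scatter(x)                       # expensive MC scatter estimate
        ratio = y / np.clip(A @ x + s, 1e-9, None)  # measured / modelled projections
        x *= (A.T @ ratio) / np.clip(sens, 1e-9, None)
    return x

# Tiny example: 2-pixel object, 2-bin projector, scatter-free stand-in.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
y = A @ np.array([2.0, 3.0])
print(mlem_intermittent(y, A, mc_scatter=lambda x: np.zeros_like(y)))
```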
A fast and complete GEANT4 and ROOT Object-Oriented Toolkit: GROOT
NASA Astrophysics Data System (ADS)
Lattuada, D.; Balabanski, D. L.; Chesnevskaya, S.; Costa, M.; Crucillà, V.; Guardo, G. L.; La Cognata, M.; Matei, C.; Pizzone, R. G.; Romano, S.; Spitaleri, C.; Tumino, A.; Xu, Y.
2018-01-01
Present and future gamma-beam facilities represent a great opportunity to validate and evaluate the cross-sections of many photonuclear reactions at near-threshold energies. Monte Carlo (MC) simulations are very important to evaluate the reaction rates and to maximize the detection efficiency but, unfortunately, they can be very CPU-time consuming and in some cases very hard to reproduce, especially when exploring near-threshold cross-sections. We developed software that makes use of the validated GEANT4 tracking libraries and the n-body event generator of ROOT in order to provide a fast, reliable and complete MC tool to be used for nuclear physics experiments. This tool is intended to be used for photonuclear reactions at γ-beam facilities with ELISSA (ELI Silicon Strip Array), a new detector array under development at the Extreme Light Infrastructure - Nuclear Physics (ELI-NP). We discuss the results of MC simulations performed to evaluate the effects of the electromagnetically induced background, of the straggling due to the target thickness, and of the resolution of the silicon detectors.
Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.
Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut
2007-04-20
We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J
2013-04-21
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
NASA Astrophysics Data System (ADS)
Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.
2013-04-01
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
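The table-movement bookkeeping reduces to advancing the isocenter along z in proportion to the accumulated gantry rotation. A small sketch under that assumption; function and parameter names are illustrative, not the DOSXYZnrc variables:

```python
import numpy as np

def isocenter_z(angle_deg, pitch, slice_thickness_cm, z0=0.0):
    """Isocenter (table) z-position as a function of cumulative gantry angle.

    For a spiral scan the table advances by pitch * slice_thickness per
    360-degree rotation; here that translation is applied continuously,
    mirroring how the source model shifts the isocenter with beam angle.
    """
    rotations = angle_deg / 360.0
    return z0 + rotations * pitch * slice_thickness_cm

# Example: isocenter positions over two rotations, pitch 1.5, 1 cm slices.
for a in np.linspace(0.0, 720.0, 9):
    print(f"{a:6.1f} deg -> z = {isocenter_z(a, 1.5, 1.0):.3f} cm")
```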
Tessonnier, Thomas; Marcelos, Tiago; Mairani, Andrea; Brons, Stephan; Parodi, Katia
2015-01-01
In the field of radiation therapy, accurate and robust dose calculation is required. For this purpose, precise modeling of the irradiation system and reliable computational platforms are needed. At the Heidelberg Ion Therapy Center (HIT), the beamline has already been modeled in the FLUKA Monte Carlo (MC) code. However, this model was kept confidential for disclosure reasons and was not available to any external team. The main goal of this study was to efficiently create phase-space (PS) files for proton and carbon ion beams, for all energies and foci available at HIT. PSs represent the characteristics of each particle recorded (charge, mass, energy, coordinates, direction cosines, generation) at a certain position along the beam path. To achieve this goal while keeping a reasonable data size and maintaining the requested calculation accuracy, we developed a new approach to beam PS generation with the MC code FLUKA. The generated PSs were obtained using an infinitely narrow beam and recording the desired quantities after the last element of the beamline, with a discrimination of primaries and secondaries. In this way, a unique PS can be used for each energy to accommodate the different foci by combining the narrow-beam scenario with a random sampling of its theoretical Gaussian beam in vacuum. PSs can also reproduce the different patterns from the delivery system when properly combined with the beam scanning information. MC simulations using PSs have been compared to simulations including the full beamline geometry, and have been found to be in very good agreement for several cases (depth dose distributions, lateral dose profiles), with relative dose differences below 0.5%. This approach has also been compared with measured data of ion beams with different energies and foci, resulting in a very satisfactory agreement. Hence, the proposed approach was able to fulfill the different requirements and has demonstrated its capability for application to clinical treatment fields. It also offers a powerful tool to perform investigations on the contribution of primary and secondary particles produced in the beamline. These PSs are already made available to external teams upon request, to support interpretation of their measurements.
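The focus-accommodation step can be pictured as adding a Gaussian transverse offset to each pencil-beam phase-space particle. A minimal sketch, assuming uncorrelated x/y Gaussians in vacuum; sigma values and names are placeholders for the machine-specific foci:

```python
import numpy as np

rng = np.random.default_rng(1)

def broaden_phase_space(ps_xy, sigma_x_cm, sigma_y_cm):
    """Turn a pencil-beam phase space into a finite focus.

    Each particle recorded for an infinitely narrow beam is displaced by
    a random offset drawn from the theoretical Gaussian beam profile in
    vacuum, so a single phase-space file per energy can serve all foci.
    """
    offsets = rng.normal(0.0, (sigma_x_cm, sigma_y_cm), size=ps_xy.shape)
    return ps_xy + offsets

# Example: 5 particles scored on-axis, resampled for a 5 mm (1 sigma) focus.
narrow = np.zeros((5, 2))          # (x, y) of pencil-beam particles, in cm
print(broaden_phase_space(narrow, 0.5, 0.5))
```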
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagaoka, Masataka; Core Research for Evolutional Science and Technology; ESICB, Kyoto University, Kyodai Katsura, Nishikyo-ku, Kyoto 615-8520
A new efficient hybrid Monte Carlo (MC)/molecular dynamics (MD) reaction method with a rare-event-driving mechanism is introduced as a practical ‘atomistic’ molecular simulation of large-scale chemically reactive systems. Starting with its demonstrative application to the racemization reaction of (R)-2-chlorobutane in N,N-dimethylformamide solution, several other applications are shown from the practical viewpoint of molecular control of complex chemical reactions, stereochemistry and aggregate structures. Finally, I would like to mention future applications of the hybrid MC/MD reaction method.
Elsner, Jonathan J; Shemesh, Maoz; Shefy-Peleg, Adaya; Gabet, Yankel; Zylberberg, Eyal; Linder-Ganz, Eran
2015-09-01
A synthetic meniscus implant was recently developed for the treatment of patients with mild to moderate osteoarthritis and knee pain associated with medial joint overload. The implant is distinctively different from most orthopedic implants in its pliable construction and non-anchored design, which enables implantation through a mini-arthrotomy without disruption to the bone, cartilage, and ligaments. Due to these features, it is important to show that the material and design can withstand knee joint conditions. This study evaluated the long-term performance of this device by simulating loading for a total of 5 million gait cycles (Mc), corresponding to approximately five years of service in vivo. All five implants remained in good condition and did not dislodge from the joint space during the simulation. Mild abrasion was detected by electron microscopy, but µ-CT scans of the implants confirmed that the damage was confined to the superficial surfaces. The average gravimetric wear rate was 14.5 mg/Mc, whereas volumetric changes in reconstructed µ-CT scans point to an average wear rate of 15.76 mm³/Mc (18.8 mg/Mc). Particles isolated from the lubricant had an average diameter of 15 µm. The wear performance of this polycarbonate-urethane meniscus implant concept under ISO-14243 loading conditions is encouraging. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
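The two-stage logic can be sketched as: screen all samples with the surrogate, then re-run the expensive model only for samples whose surrogate response falls near the failure threshold. A toy version with a synthetic response; the band width, surrogate, and threshold are illustrative choices, not the paper's KLE/SIR/PCE construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def two_stage_failure_prob(sample_inputs, surrogate, full_model,
                           threshold, n=100_000, band=0.1):
    """Two-stage MC estimate of a small failure probability.

    Stage 1 screens many samples with a cheap surrogate; stage 2 re-runs
    the expensive model only where the surrogate response lies within
    `band` of the failure threshold, correcting the surrogate's bias.
    """
    x = sample_inputs(n)
    g = surrogate(x)
    sure_fail = g > threshold + band       # far past the boundary: trust surrogate
    near = np.abs(g - threshold) <= band   # ambiguous: re-evaluate exactly
    fails = np.count_nonzero(sure_fail)
    fails += np.count_nonzero(full_model(x[near]) > threshold)
    return fails / n

# Toy example: failure when a quadratic response exceeds 9 (P ~ 1e-2).
full = lambda x: x[:, 0] ** 2 + x[:, 1] ** 2
cheap = lambda x: full(x) + rng.normal(0, 0.05, len(x))  # imperfect surrogate
p = two_stage_failure_prob(lambda n: rng.normal(0, 1, (n, 2)), cheap, full, 9.0)
print(f"estimated failure probability: {p:.2e}")
```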
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2017-07-01
Mammograms acquired with full-field digital mammography (FFDM) systems are provided in both "for-processing" and "for-presentation" image formats. For-presentation images are traditionally intended for visual assessment by the radiologists. In this study, we investigate the feasibility of using for-presentation images in computerized analysis and diagnosis of microcalcification (MC) lesions. We make use of a set of 188 matched mammogram image pairs of MC lesions from 95 cases (biopsy proven), in which both for-presentation and for-processing images are provided for each lesion. We then analyze and characterize the MC lesions from for-presentation images and compare them with their counterparts in for-processing images. Specifically, we consider three important aspects in computer-aided diagnosis (CAD) of MC lesions. First, we quantify each MC lesion with a set of 10 image features of clustered MCs and 12 textural features of the lesion area. Second, we assess the detectability of individual MCs in each lesion from the for-presentation images by a commonly used difference-of-Gaussians (DoG) detector. Finally, we study the diagnostic accuracy in discriminating between benign and malignant MC lesions from the for-presentation images by a pretrained support vector machine (SVM) classifier. To accommodate the underlying background suppression and image enhancement in for-presentation images, a normalization procedure is applied. The quantitative image features of MC lesions from for-presentation images are highly consistent with those from for-processing images. The values of Pearson's correlation coefficient between features from the two formats range from 0.824 to 0.961 for the 10 MC image features, and from 0.871 to 0.963 for the 12 textural features. In detection of individual MCs, the FROC curve from for-presentation is similar to that from for-processing. In particular, at a sensitivity level of 80%, the average number of false-positives (FPs) per image region is 9.55 for both for-presentation and for-processing images. Finally, for classifying MC lesions as malignant or benign, the area under the ROC curve is 0.769 in for-presentation, compared to 0.761 in for-processing (P = 0.436). The quantitative results demonstrate that MC lesions in for-presentation images are highly consistent with those in for-processing images in terms of image features, detectability of individual MCs, and classification accuracy between malignant and benign lesions. These results indicate that for-presentation images can be compatible with for-processing images for use in CAD algorithms for MC lesions. © 2017 American Association of Physicists in Medicine.
Kapanen, Mika K.; Hyödynmaa, Simo J.; Wigren, Tuija K.; Pitkänen, Maunu A.
2014-01-01
The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms — pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB) — implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with the same number of monitor units (MUs) and identical field settings) using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, to which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with the same number of MUs and identical field settings) with the AXB algorithm, then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess the plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement with the full MC simulation in the light of the 3D gamma analysis and DVH comparison, especially with large PTVs, but with smaller PTVs larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all the plans with the 3%/3 mm threshold criteria, but the 2%/2 mm criteria showed larger discrepancies. The TPS algorithm comparison results showed large dose discrepancies in the PTV mean dose (D50%), nearly 60%, for the PBC algorithm, and differences of nearly 20% for the AAA, also occurring in the small PTV size range. This work suggests the application of independent plan verification when the AAA or the AXB algorithm is utilized in lung SBRT with PTVs smaller than 20-25 cc. The calculated data from this study can be used in converting SBRT protocols based on type 'a' and/or type 'b' algorithms to the most recent generation of type 'c' algorithms, such as the AXB algorithm. PACS numbers: 87.55.-x, 87.55.D-, 87.55.K-, 87.55.kd, 87.55.Qr PMID:24710454
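For reference, the gamma analysis used above combines a dose-difference criterion and a distance-to-agreement criterion. A simplified 1D global-gamma sketch (the study uses a 3D analysis); criteria and test profiles are illustrative:

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, coords_mm, dose_crit=0.03, dist_crit_mm=3.0):
    """Global gamma analysis for 1D dose profiles (3%/3 mm by default).

    For each reference point the minimum of the combined dose/distance
    metric over all evaluated points is taken; gamma <= 1 passes.
    """
    d_norm = dose_crit * dose_ref.max()            # global dose criterion
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(coords_mm, dose_ref)):
        dist2 = ((coords_mm - x_r) / dist_crit_mm) ** 2
        dose2 = ((dose_eval - d_r) / d_norm) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

# Example: a 1 mm-shifted Gaussian profile evaluated against its reference.
x = np.arange(-50.0, 50.0, 1.0)
ref = np.exp(-(x / 20.0) ** 2)
ev = np.exp(-((x - 1.0) / 20.0) ** 2)
g = gamma_index(ev, ref, x)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```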
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safigholi, H; Soliman, A; Song, W
Purpose: Brachytherapy treatment planning systems based on the TG-43 protocol calculate the dose in water and neglect the heterogeneity effect of seeds in multi-seed implant brachytherapy. In this research, the accuracy of a novel analytical model that we propose for the inter-seed attenuation (ISA) effect for a 103-Pd seed model is evaluated. Methods: In the analytical model, the dose perturbation due to the ISA effect for each seed in an LDR multi-seed implant for 103-Pd is calculated by assuming that the seed of interest is active and the other surrounding seeds are inactive. The cumulative dosimetric effect of all seeds is then summed using the superposition principle. The model is based on pre-simulated Monte Carlo (MC) 3D kernels of the dose perturbations caused by the ISA effect. The cumulative ISA effect due to multiple surrounding seeds is obtained by a simple multiplication of the individual ISA effect of each seed, which is determined by the distance from the seed of interest. This novel algorithm is then compared with full MC water-based simulations (FMCW). Results: The results show that the dose perturbation model we propose is in good agreement with the FMCW values for a case with three seeds separated by 1 cm. The average difference between the model and the FMCW simulations was less than 8%±2%. Conclusion: Using the proposed novel analytical ISA effect model, one could expedite the corrections for the ISA dose perturbation effects during permanent-seed 103-Pd brachytherapy planning with minimal increase in time, since the model is based on multiplications and superposition. This model can be applied, in principle, to any other brachytherapy seeds. Further work is necessary to validate this model on a more complicated geometry as well.
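The multiplicative model can be sketched in a few lines: look up each surrounding seed's perturbation factor from a pre-computed kernel and multiply. The exponential kernel below is a made-up stand-in for the pre-simulated MC 3D kernels:

```python
import numpy as np

def cumulative_isa_factor(point, perturbing_seeds, kernel):
    """Cumulative inter-seed attenuation at `point` for one active seed.

    Following the multiplicative model described above, the dose
    perturbation from each inactive seed is looked up in a pre-computed
    kernel (here a toy function of seed-to-point distance) and the
    individual factors are simply multiplied together.
    """
    factor = 1.0
    for seed in perturbing_seeds:
        r = np.linalg.norm(np.asarray(point) - np.asarray(seed))
        factor *= kernel(r)
    return factor

# Toy kernel: perturbation decays with distance from the perturbing seed.
toy_kernel = lambda r_cm: 1.0 - 0.05 * np.exp(-r_cm)

# Three seeds 1 cm apart; the middle seed is active, the outer two perturb.
seeds = [(0.0, 0.0, -1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(cumulative_isa_factor((0.5, 0.0, 0.0), seeds[:1] + seeds[2:], toy_kernel))
```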
NASA Astrophysics Data System (ADS)
Truong, Thanh N.; Stefanovich, Eugene V.
1997-05-01
We present a study of the micro-solvation of the Cl− anion by water clusters of sizes up to seven molecules, using a perturbative Monte Carlo approach with a hybrid HF/MM potential. In this approach, perturbation theory is used to avoid performing full SCF calculations at every Monte Carlo step. In this study, the anion is treated quantum mechanically at the HF/6-31G* level of theory, while interactions between solvent waters are represented by the TIP3P force field. Analysis of the solvent-induced dipole moment of the ion indicates that the Cl− anion resides most of the time on the surface of the clusters. The accuracy of the perturbative MC approach is also discussed.
Photoluminescence Imaging and LBIC Characterization of Defects in mc-Si Solar Cells
NASA Astrophysics Data System (ADS)
Sánchez, L. A.; Moretón, A.; Guada, M.; Rodríguez-Conde, S.; Martínez, O.; González, M. A.; Jiménez, J.
2018-05-01
Today's photovoltaic market is dominated by multicrystalline silicon (mc-Si) based solar cells, with around 70% of worldwide production. In order to improve the quality of the Si material, a proper characterization of the electrical activity in mc-Si solar cells is essential. A full-wafer characterization technique such as photoluminescence imaging (PLi) provides fast inspection of wafer defects, though at the expense of spatial resolution. On the other hand, a study of the defects at a microscopic scale can be achieved with the light-beam induced current (LBIC) technique. The combination of these macroscopic and microscopic techniques allows a detailed study of the electrical activity of defects in mc-Si solar cells. In this work, upgraded metallurgical-grade Si solar cells are studied using these two techniques.
Saraçoğlu, Ayten; Bezen, Olgaç; Şengül, Türker; Uğur, Egin Hüsnü; Şener, Sibel; Yüzer, Fisun
2015-08-01
Interruption of chest compressions should be minimized because of its negative effects on survival. This randomized, controlled, cross-over study aimed to analyze the effectiveness of the Macintosh, Miller, McCoy and McGrath laryngoscopes with or without chest compressions in a simulated cardiopulmonary resuscitation scenario. The time required for successful tracheal intubation, the number of attempts, dental trauma severity and the need for optimization manoeuvres were recorded during cardiopulmonary resuscitation with and without chest compressions. Participants were also asked about their experience with computer games during the last 10 years, and this was recorded. The McCoy laryngoscope yielded the shortest time to successful tracheal intubation both with and without chest compressions. During the use of the McCoy laryngoscope, fewer tracheal intubation attempts, a lower incidence of dental trauma and lower visual analogue scale scores on the ease of intubation were recorded. Participants who are experienced computer game players achieved successful tracheal intubation in a significantly shorter time with the Macintosh, McCoy and McGrath during resuscitation without chest compressions. Dental trauma incidence and the number of tracheal intubation attempts did not show any significant difference between the four laryngoscopes in relation to the rate of playing computer games. McGrath video laryngoscopes do not appear to have advantages over direct laryngoscopes for securing smooth and successful tracheal intubation during rhythmic chest compressions. We believe that, as the McCoy laryngoscope provided tracheal intubation in a shorter time and with fewer attempts, this laryngoscope may increase the success rate of resuscitation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Shoudi; He, Jiansen; Yang, Liping
The impact of an overtaking fast shock on a magnetic cloud (MC) is a pivotal process in CME–CME (CME: coronal mass ejection) interactions and CME–SIR (SIR: stream interaction region) interactions. An MC, with a strong and rotating magnetic field, is usually deemed a crucial part of a CME. To study the impact of a fast shock on an MC, we perform a 2.5-dimensional numerical magnetohydrodynamic simulation. Two cases are run in this study: without and with impact by a fast shock. In the former case, the MC expands gradually from its initial state and drives a relatively slow magnetic reconnection with the ambient magnetic field. Analysis of the forces near the core of the MC as a whole body indicates that the solar gravity is quite small compared to the Lorentz force and the pressure gradient force. In the second run, a fast shock propagates, relative to the background plasma, at a speed twice the perpendicular fast magnetosonic speed, and catches up with and overtakes the MC. Due to the penetration of the fast shock, the MC is highly compressed and heated, with the temperature growth rate enhanced by a factor of about 10 and the velocity increased to about half of the shock speed. The magnetic reconnection with the ambient magnetic field is also sped up, by a factor of two to four in reconnection rate, as a result of the enhanced density of the current sheet, which is squeezed by the forward motion of the shocked MC.
NASA Astrophysics Data System (ADS)
Krishnan, A.; Mou, X. J.
2015-12-01
Lake Erie, the smallest and warmest lake among the Laurentian Great Lakes, is known for its problem of eutrophication and frequent occurrence of harmful cyanobacterial blooms (CyanoHABs). One major harmful effect of CyanoHABs is the production of cyanotoxins, especially microcystins. Microcystins (MC) are a group of hepatotoxins, and their predominant variant is MC-LR. Field measurements and lab experiments indicate that MC degradation in Lake Erie is mainly carried out by indigenous bacteria. However, our knowledge of the taxa involved in this process is very limited. This study aimed to fill this knowledge gap using a culture-dependent approach. Water and surface sediment samples were collected from Lake Erie in 2014 and 2015 and enriched with MC-LR. Cells were plated on a number of culturing media. The obtained pure bacterial cultures were screened for MC-degrading abilities by MT2 BIOLOG assays and by growing cells in liquid media containing MC-LR as the sole carbon source. In the latter experiment, MC concentrations were measured using HPLC. Isolates showing positive MC degradation activities in the screening steps were designated MC+ bacteria and characterized based on their phenotypic properties, including colony pigmentation, elevation, opacity, margin, gram nature and motility. The taxonomic identity of MC+ bacteria was determined by full-length 16S rRNA gene sequencing. The presence of mlrA, a gene of the MC cleavage pathway, was detected by PCR. Our culturing efforts obtained 520 pure cultures; 44 of them were identified as MC+. These MC+ isolates showed diversity in taxonomic identity and differed in their morphology, gram nature, colony characteristics and motility. PCR amplification of the mlrA gene yielded negative results for all MC+ isolates, indicating that the primers used may not be ubiquitous enough to cover the heterogeneity of mlrA genes or, more likely, that alternative degradative genes/pathways are employed by Lake Erie bacteria. The MC+ isolates can serve as models for future identification of MC degradation pathways and be used to develop or augment biofilters for effective treatment of MC-contaminated water. Key Words: CyanoHAB, microcystins, degradation
A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.
Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei
2014-12-16
The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient. Two intensity-modulated radiation therapy (IMRT) plans were developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans were developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For 3DCRT and IMRT plans, the mean dose differences for GTV between the CCC and MC increased with decreasing GTV volume. For IMRT, the mean dose differences were found to be higher than those of 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm³, the mean doses calculated by CCC and MC differed very little. PBC shows large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, whereas the CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately calculate the dose distribution in lung cancer and provides a notably effective tool for benchmarking the performance of other dose calculation algorithms within patients.
NASA Astrophysics Data System (ADS)
Bootsma, Gregory J.
X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis, it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables a reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1-2 minutes instead of hours or days. The resulting scatter-corrected reconstructions show a reduction in artifacts and an improvement in tissue contrast and voxel value accuracy.
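The low-frequency finding suggests fitting a noisy, low-photon-count MC scatter estimate with a truncated Fourier series. A 1D least-squares sketch; the harmonic count and test profile are illustrative, not the thesis' fitting procedure:

```python
import numpy as np

def fit_fourier_scatter(u, scatter_mc, n_harmonics=4):
    """Least-squares fit of a noisy MC scatter profile to a low-order
    Fourier series, reflecting the finding that scatter is dominated by
    low spatial frequencies.  `u` is the detector coordinate on [0, 1].
    """
    cols = [np.ones_like(u)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * u), np.sin(2 * np.pi * k * u)]
    basis = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(basis, scatter_mc, rcond=None)
    return basis @ coeffs  # smooth scatter estimate from few photon tracks

# Example: recover a smooth hump from a very noisy MC estimate.
rng = np.random.default_rng(3)
u = np.linspace(0.0, 1.0, 256)
true = 1.0 + 0.5 * np.sin(np.pi * u)
noisy = true + rng.normal(0.0, 0.3, u.size)  # low photon-count MC noise
smooth = fit_fourier_scatter(u, noisy)
print(f"rms error vs truth: {np.sqrt(np.mean((smooth - true) ** 2)):.3f}")
```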
Cuong, Do Manh; Arasu, Mariadhas Valan; Jeon, Jin; Park, Yun Ji; Kwon, Soon-Jae; Al-Dhabi, Naif Abdullah; Park, Sang Un
2017-12-01
Carotenoids, found in the fruit and different organs of bitter melon (Momordica charantia), have attracted great attention for their potential health benefits in treating several major chronic diseases. Therefore, the identification and quantification of the medically important carotenoid metabolites is highly relevant for the treatment of various disorders. The present study involved the identification and quantification of the various carotenoids present in the different organs of M. charantia, together with the identification of the genes responsible for carotenoid accumulation at the transcriptome level. In this study, using the transcriptome database of bitter melon, a partial-length cDNA clone encoding geranylgeranyl pyrophosphate synthase (McGGPPS2) and several full-length cDNA clones encoding geranylgeranyl pyrophosphate synthase (McGGPPS1), zeta-carotene desaturase (McZDS), lycopene beta-cyclase (McLCYB), lycopene epsilon cyclases (McLCYE1 and McLCYE2), beta-carotene hydroxylase (McCHXB), and zeaxanthin epoxidase (McZEP) were identified in bitter melon. The expression levels of the mRNAs encoding these eight putative biosynthetic enzymes, as well as the accumulation of lycopene, α-carotene, lutein, 13Z-β-carotene, E-β-carotene, 9Z-β-carotene, β-cryptoxanthin, zeaxanthin, antheraxanthin, and violaxanthin, were investigated in different organs of M. charantia as well as in the four different stages of its fruit maturation. Transcripts were found to be constitutively expressed at high levels in the leaves, where carotenoids were also found at the highest levels. Collectively, these results indicate that the putative McGGPPS2, McZDS, McLCYB, McLCYE1, McLCYE2, and McCHXB enzymes might be key factors in controlling carotenoid content in bitter melon. In conclusion, overexpression of these carotenoid biosynthetic genes in M. charantia crops could increase the yield of these medically important carotenoids.
Mode conversion in ICRF experiments on Alcator C-Mod
NASA Astrophysics Data System (ADS)
Lin, Y.; Wukitch, S. J.; Edlund, E.; Ennever, P.; Hubbard, A. E.; Porkolab, M.; Rice, J.; Wright, J.
2017-10-01
In a recent three-ion-species (majority D and H plus a trace level of 3He) ICRF heating experiment on Alcator C-Mod, double mode conversion on both sides of the 3He cyclotron resonance has been observed using the phase contrast imaging (PCI) system. The MC locations are used to estimate the species concentrations in the plasma. Simulation using TORIC shows that with the 3He level <1%, most RF power is absorbed by the 3He ions and the process can generate energetic 3He ions. In a recent mode-conversion flow-drive experiment in D(3He) plasma at 8 T, MC waves were also monitored by PCI. The MC ion cyclotron wave (ICW) amplitude and wavenumber k_R have been found to correlate with the flow-drive force. The MC efficiency, the wavenumber k of the MC ICW, and their dependence on plasma parameters such as T_e0 are shown to play important roles. Based on the experimental observations and a numerical study of the dispersion solutions, a hypothesis for the flow-drive mechanism has been proposed. Supported by USDoE award DE-FC02-99ER54512.
Monowar, Muhammad Mostafa; Hassan, Mohammad Mehedi; Bajaber, Fuad; Al-Hussein, Musaed; Alamri, Atif
2012-01-01
The emergence of heterogeneous applications with diverse requirements for resource-constrained Wireless Body Area Networks (WBANs) poses significant challenges for provisioning Quality of Service (QoS) with multiple constraints (delay and reliability) while preserving energy efficiency. To address such challenges, this paper proposes McMAC, a MAC protocol with multi-constrained QoS provisioning for diverse traffic classes in WBANs. McMAC classifies traffic based on its multi-constrained QoS demands and introduces a novel superframe structure based on the “transmit-whenever-appropriate” principle, which allows diverse periods for diverse traffic classes according to their respective QoS requirements. Furthermore, a novel emergency packet handling mechanism is proposed to ensure packet delivery with the least possible delay and the highest reliability. McMAC is also modeled analytically, and extensive simulations were performed to evaluate its performance. The results reveal that McMAC achieves the desired delay and reliability guarantees according to the requirements of a particular traffic class while achieving energy efficiency. PMID:23202224
Michael J. Aspinwall; John S. King; Steven E. McKeand; Bronson P. Bullock
2012-01-01
Several decades of tree improvement operations have drastically increased loblolly pine plantation productivity in the southern U.S. (McKeand et al., 2003). This work has led to the availability of a number of highly productive open-pollinated and full-sib families (McKeand et al., 2006). In addition, vegetative propagation (somatic embryogenesis) has also made it...
Constant-pH molecular dynamics using stochastic titration
NASA Astrophysics Data System (ADS)
Baptista, António M.; Teixeira, Vitor H.; Soares, Cláudio M.
2002-09-01
A new method is proposed for performing constant-pH molecular dynamics (MD) simulations, that is, MD simulations where pH is one of the external thermodynamic parameters, like the temperature or the pressure. The protonation state of each titrable site in the solute is allowed to change during a molecular mechanics (MM) MD simulation, the new states being obtained from a combination of continuum electrostatics (CE) calculations and Monte Carlo (MC) simulation of protonation equilibrium. The coupling between the MM/MD and CE/MC algorithms is done in a way that ensures a proper Markov chain, sampling from the intended semigrand canonical distribution. This stochastic titration method is applied to succinic acid, aimed at illustrating the method and examining the choice of its adjustable parameters. The complete titration of succinic acid, using constant-pH MD simulations at different pH values, gives a clear picture of the coupling between the trans/gauche isomerization and the protonation process, making it possible to reconcile some apparently contradictory results of previous studies. The present constant-pH MD method is shown to require a moderate increase of computational cost when compared to the usual MD method.
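For a single independent titrable site with only the ideal pH term, the CE/MC stage reduces to Metropolis sampling of protonation states. A toy sketch under that assumption (the full method adds continuum-electrostatics energies of the instantaneous MD structure and alternates these moves with MM/MD segments):

```python
import numpy as np

rng = np.random.default_rng(4)

def titration_mc(pH, pKa, n_steps=20_000):
    """Metropolis MC of a single titrable site at fixed pH.

    Only the ideal term ln(10)*kT*(pH - pKa) is used here; the full
    stochastic-titration method supplements it with continuum-
    electrostatics energies and feeds accepted states back into MD,
    which is what makes the chain sample the semigrand canonical
    distribution.
    """
    protonated, occupancy = False, 0
    for _ in range(n_steps):
        # Free-energy change (in kT) of flipping the protonation state.
        dE = np.log(10.0) * (pH - pKa) * (-1.0 if protonated else 1.0)
        if rng.random() < np.exp(-dE):  # Metropolis acceptance
            protonated = not protonated
        occupancy += protonated
    return occupancy / n_steps

# Sanity check against Henderson-Hasselbalch: half-protonation at pH = pKa.
for pH in (3.0, 4.0, 5.0):
    print(pH, round(titration_mc(pH, pKa=4.0), 3))
```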
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S
2015-06-15
Purpose: To quantify the dosimetric variations of misaligned beams for a linear accelerator by using Monte Carlo (MC) simulations. Method and Materials: Misaligned beams of a Varian 21EX Clinac were simulated to estimate the dosimetric effects. All the linac head components for a 6 MV photon beam were implemented in the BEAMnrc/EGSnrc system. For the incident electron beam parameters, a 6 MeV beam with a 0.1 cm full-width-half-maximum Gaussian profile was used. A phase space file was obtained below the jaw for each misalignment condition of the incident electron beam: (1) the incident electron beams were tilted by 0.5, 1.0 and 1.5 degrees on the x-axis from the central axis; (2) the center of the incident electron beam was moved off-axis toward the +x-axis by 0.1, 0.2, and 0.3 cm away from the central axis. Lateral profiles for each misaligned beam condition were acquired at dmax = 1.5 cm and 10 cm depth in a rectangular water phantom. Beam flatness and symmetry were calculated using the lateral profile data. Results: The lateral profiles were found to be skewed opposite to the angle of the incident beam for the tilted beams. For the displaced beams, similar skewed lateral profiles were obtained with small shifts of the penumbra on the +x-axis. The variations of beam flatness were 3.89-11.18% and 4.12-42.57% for the tilted beam and the translated beam, respectively. The beam symmetry was found to be 2.95-9.93% and 2.55-38.06%, respectively. The percent increases of the flatness and symmetry values are approximately 2% to 3% per 0.5 degree of tilt or per 1 mm of displacement. Conclusion: This study quantified the dosimetric effects of misaligned beams using MC simulations. The results would be useful for understanding the magnitude of the dosimetric deviations for misaligned beams.
Higher Harmonics in Heavy Ion Collisions
NASA Astrophysics Data System (ADS)
Jeon, Sangyong
2013-03-01
As the QGP expands and cools, it carries much information on its creation and evolution imprinted on the patterns of higher harmonic flow. In this proceeding we report on the progress in simulating and understanding the higher harmonics by the McGill group using the 3+1D event-by-event viscous hydrodynamics simulation suite named MUSIC.
A framework for simulating map error in ecosystem models
Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard
2014-01-01
The temporal depth and spatial breadth of observations from platforms such as Landsat provide a unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...
NASA Astrophysics Data System (ADS)
Kredler, L.; Häußler, W.; Martin, N.; Böni, P.
The flux is still a major limiting factor in neutron research. For instruments supplied with cold neutrons via neutron guides, both at present steady-state sources and at new spallation neutron sources, it is therefore important to optimize the instrumental setup and the neutron guidance. Optimization of the neutron guide geometry and of the instrument itself can be performed by numerical ray-tracing simulations using existing open-access codes. In this paper, we discuss how such Monte Carlo simulations have been employed to plan improvements of the Neutron Resonant Spin Echo spectrometer RESEDA (FRM II, Germany), as well as of the neutron guides before and within the instrument. The essential components have been represented with the help of the McStas ray-tracing package. The expected intensity has been tested by means of several virtual detectors implemented in the simulation code. Comparison between simulations and preliminary measurement results shows good agreement and demonstrates the reliability of the numerical approach. These results will be taken into account in the planning of new components installed in the guide system.
SUPERNOVA DRIVING. II. COMPRESSIVE RATIO IN MOLECULAR-CLOUD TURBULENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Liubin; Padoan, Paolo; Haugbølle, Troels
2016-07-01
The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a short response time.
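The key ingredients (independent random-number streams per worker and embarrassingly parallel track batches) can be sketched with Python's multiprocessing in place of MPI. The "track" here is a toy exponential free path, and the seeding scheme stands in for the role SPRNG/DCMT play in the MPI version:

```python
import numpy as np
from multiprocessing import Pool

def simulate_tracks(args):
    """Transport a batch of particles with a worker-private RNG stream.

    Stands in for the MC4 track loop; each worker gets a statistically
    independent generator via NumPy's SeedSequence spawning.
    """
    seed_seq, n_tracks = args
    rng = np.random.default_rng(seed_seq)
    depths = rng.exponential(scale=1.0, size=n_tracks)  # toy free paths
    return depths.sum()

if __name__ == "__main__":
    n_workers, tracks_per_worker = 4, 250_000
    # Spawn independent, non-overlapping random streams for each worker.
    streams = np.random.SeedSequence(12345).spawn(n_workers)
    with Pool(n_workers) as pool:
        totals = pool.map(simulate_tracks,
                          [(s, tracks_per_worker) for s in streams])
    print("mean path length:", sum(totals) / (n_workers * tracks_per_worker))
```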
Zhou, Shiqi; Lamperski, Stanisław; Zydorczak, Maria
2014-08-14
Monte Carlo (MC) simulation and classical density functional theory (DFT) results are reported for the structural and electrostatic properties of a planar electric double layer containing ions having highly asymmetric diameters or valencies under extreme concentration conditions. In the applied DFT, for the excess free energy contribution due to the hard sphere repulsion, a recently elaborated extended form of the fundamental measure functional is used, and the coupling of Coulombic and short-range hard-sphere repulsion is described by a traditional second-order functional perturbation expansion approximation. Comparison between the MC and DFT results indicates that the validity interval of the traditional DFT approximation extends to ion valences up to 3 and size asymmetries up to a diameter ratio of 4, whether the high-valence or large-size ions are co- or counter-ions, and to bulk electrolyte concentrations close to the upper limit of what the MC simulation can handle well. The dependence of the DFT accuracy on the ion parameters can be self-consistently explained using arguments of liquid state theory, and new EDL phenomena, such as an overscreening effect due to monovalent counter-ions, an extreme layering effect of counter-ions, and the appearance of a depletion layer with almost no counter- and co-ions, are observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewkow, N. R.; Kharchenko, V.
2014-08-01
The precipitation of energetic neutral atoms, produced through charge exchange collisions between solar wind ions and thermal atmospheric gases, is investigated for the Martian atmosphere. Connections between parameters of precipitating fast ions and resulting escape fluxes, altitude-dependent energy distributions of fast atoms and their coefficients of reflection from the Mars atmosphere, are established using accurate cross sections in Monte Carlo (MC) simulations. Distributions of secondary hot (SH) atoms and molecules, induced by precipitating particles, have been obtained and applied for computations of the non-thermal escape fluxes. A new collisional database on accurate energy-angular-dependent cross sections, required for description of the energy-momentum transfer in collisions of precipitating particles and production of non-thermal atmospheric atoms and molecules, is reported with analytic fitting equations. Three-dimensional MC simulations with accurate energy-angular-dependent cross sections have been carried out to track large ensembles of energetic atoms in a time-dependent manner as they propagate into the Martian atmosphere and transfer their energy to the ambient atoms and molecules. Results of the MC simulations on the energy-deposition altitude profiles, reflection coefficients, and time-dependent atmospheric heating, obtained for the isotropic hard sphere and anisotropic quantum cross sections, are compared. Atmospheric heating rates, thermalization depths, altitude profiles of production rates, energy distributions of SH atoms and molecules, and induced escape fluxes have been determined.
Liu, Lei; Wade, Rebecca C; Heermann, Dieter W
2015-09-01
The conformational properties of unbound multi-Cys2His2 (mC2H2) zinc finger proteins, in which zinc finger domains are connected by flexible linkers, are studied by a multiscale approach. Three methods on different length scales are utilized. First, atomic-detail molecular dynamics simulations of one zinc finger and its adjacent flexible linker confirmed that the zinc finger is more rigid than the flexible linker. Second, the end-to-end distance distributions of mC2H2 zinc finger proteins are computed using an efficient atomistic pivoting algorithm, which only takes excluded volume interactions into consideration. The end-to-end distance distribution gradually changes its profile, from left-tailed to right-tailed, as the number of zinc fingers increases. This is explained by using a worm-like chain model. For proteins of a few zinc fingers, an effective bending constraint favors an extended conformation. Only for proteins containing more than nine zinc fingers is a somewhat compacted conformation preferred. Third, a mesoscale model is modified to study both the local and the global conformational properties of mC2H2 zinc finger proteins. Simulations of the CCCTC-binding factor (CTCF), an important mC2H2 zinc finger protein for genome spatial organization, are presented. © 2015 Wiley Periodicals, Inc.
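The pivoting idea: pick a random bead, rigidly rotate the chain tail, and accept the move unless two beads overlap (excluded volume only). A 2D lattice toy of the atomistic sampler; chain length and move count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def pivot_chain(coords, n_moves=1000):
    """Pivot MC for a lattice chain with excluded volume interactions only.

    A random pivot bead is chosen and the tail is rotated by a random
    90- or 180-degree lattice rotation; the move is rejected if any two
    beads overlap.  End-to-end distances collected over many accepted
    states build the distribution discussed above.
    """
    rots = [np.array([[0, -1], [1, 0]]), np.array([[0, 1], [-1, 0]]),
            np.array([[-1, 0], [0, -1]])]
    for _ in range(n_moves):
        p = rng.integers(1, len(coords) - 1)
        R = rots[rng.integers(3)]
        trial = coords.copy()
        trial[p:] = coords[p] + (coords[p:] - coords[p]) @ R.T
        if len({tuple(c) for c in trial}) == len(trial):  # excluded volume check
            coords = trial
    return coords

chain = np.column_stack([np.arange(30), np.zeros(30, dtype=int)])  # straight start
final = pivot_chain(chain)
print("end-to-end distance:", np.linalg.norm(final[-1] - final[0]))
```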
D. Bachelet; J. Lenihan; R. Neilson; R. Drapek; T. Kittel
2005-01-01
The dynamic global vegetation model MC1 was used to examine climate, fire, and ecosystems interactions in Alaska under historical (1922-1996) and future (1997-2100) climate conditions. Projections show that by the end of the 21st century, 75%-90% of the area simulated as tundra in 1922 is replaced by boreal and temperate forest. From 1922 to 1996, simulation results...
John B Kim; Erwan Monier; Brent Sohngen; G Stephen Pitts; Ray Drapek; James McFarland; Sara Ohrel; Jefferson Cole
2016-01-01
We analyze a set of simulations to assess the impact of climate change on global forests where MC2 dynamic global vegetation model (DGVM) was run with climate simulations from the MIT Integrated Global System Model-Community Atmosphere Model (IGSM-CAM) modeling framework. The core study relies on an ensemble of climate simulations under two emissions scenarios: a...
Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco
2014-01-01
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both the particle- and wave-like properties of X-rays into consideration: a split approach combines a Monte Carlo (MC) based sample part with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. PMID:24763652
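A common wave-optics building block for the propagation part of such a split framework is free-space Fresnel propagation via the Fourier-space transfer function. The sketch below is a generic implementation under assumed parameters (photon energy, pixel size, propagation distance); it is not the authors' code.

```python
import numpy as np

def fresnel_propagate(wavefield, wavelength, pixel_size, distance):
    """Propagate a complex 2D wavefield over `distance` in free space
    using the paraxial (Fresnel) Fourier-space propagator."""
    ny, nx = wavefield.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function (constant phase e^{ikz} omitted).
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(wavefield) * H)

# Illustrative numbers: 25 keV X-rays, 1 um pixels, 1 m propagation.
wavelength = 1.2398e-6 / 25e3        # lambda[m] = 1239.8 nm / E[eV]
psize, z = 1e-6, 1.0

# Pure phase object: a weakly phase-shifting disc in a unit-amplitude field.
y, x = np.mgrid[-256:256, -256:256] * psize
phase = -0.1 * (x**2 + y**2 < (50e-6) ** 2)
field = np.exp(1j * phase)

intensity = np.abs(fresnel_propagate(field, wavelength, psize, z)) ** 2
print("edge-enhancement contrast:", intensity.max() - intensity.min())
```

In a split scheme of the kind described, the MC sample part would perturb the field (absorption, phase shift, incoherent scatter) before a propagator like this carries the coherent part to the detector or to the next grating plane.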
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, J C (Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI); Knill, C
Purpose: To determine small-field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small-field applications. Methods: The small-field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.
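The Alfonso small-field formalism invoked above defines the correction factor as a ratio of dose-to-reading ratios between the clinical field and the machine-specific reference field. In standard notation (not reproduced from this abstract):

```latex
% Alfonso et al. small-field output correction factor (standard notation).
\begin{equation}
k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}
  = \frac{D_{w,Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}} \,/\, M_{Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}}}
         {D_{w,Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}} \,/\, M_{Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}}}
\end{equation}
```

where $D_w$ is the absorbed dose to water at the measurement point, $M$ is the detector reading (or its MC surrogate), $f_{\mathrm{clin}}$ is the clinical field (here a given helmet, e.g. the 4 mm), and $f_{\mathrm{msr}}$ is the machine-specific reference field (for a Gamma Knife, typically the largest helmet). A $k$ factor of 0.969 for the 4 mm helmet thus divides out the detector's 3.2% over-response in that field.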
Monte Carlo-based QA for IMRT of head and neck cancers
NASA Astrophysics Data System (ADS)
Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.
2007-06-01
It is well known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, hence skipping the time-consuming simulation of the multileaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record-and-verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal mucosa for some treatments with heavily weighted anterior fields.
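The abstract does not state which metric its dose comparison uses; a common choice for comparing a TPS dose plane against an MC re-calculation is the gamma index. A minimal brute-force sketch with illustrative 3%/3 mm criteria and synthetic dose planes follows; it is not the turn-key system described above.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    """Brute-force 2D global gamma index between two dose planes,
    e.g. a TPS plane (reference) vs. an MC re-calculation (evaluated)."""
    norm = dose_ref.max()                       # global dose normalisation
    gamma = np.full(dose_ref.shape, np.inf)
    # Limit the search window to a few DTA radii for speed; shifts are
    # applied with np.roll (wrap-around at the borders is ignored here).
    r = int(np.ceil(3 * dta_mm / spacing_mm))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(dose_eval, dy, axis=0), dx, axis=1)
            dist2 = (dy**2 + dx**2) * spacing_mm**2 / dta_mm**2
            diff2 = ((shifted - dose_ref) / (dd * norm)) ** 2
            gamma = np.minimum(gamma, np.sqrt(dist2 + diff2))
    return gamma

# Synthetic planes: "MC" = "TPS" with a 2% noise-like perturbation.
rng = np.random.default_rng(2)
tps = np.exp(-((np.mgrid[0:64, 0:64][0] - 32) ** 2) / 400.0)
mc = tps * (1 + 0.02 * rng.standard_normal(tps.shape))

g = gamma_index(tps, mc, spacing_mm=1.0)
print("gamma pass rate (gamma <= 1):", np.mean(g <= 1.0))
```

A pixel passes when its minimum combined dose-difference/distance score is at most 1; the pass rate over the plane is the usual summary statistic for this kind of TPS-versus-MC QA comparison.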
EDITORIAL: International Workshop on Current Topics in Monte Carlo Treatment Planning
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Seuntjens, Jan
2005-03-01
The use of Monte Carlo particle transport simulations in radiotherapy was pioneered in the early nineteen-seventies, but it was not until the eighties that they gained recognition as an essential research tool for radiation dosimetry and health physics, and later on for radiation therapy treatment planning. Since the mid-nineties, there has been a boom in the number of workers using MC techniques in radiotherapy and in the quantity of papers published on the subject. Research and applications of MC techniques in radiotherapy span a very wide range, from fundamental studies of cross sections and development of particle transport algorithms to clinical evaluation of treatment plans for a variety of radiotherapy modalities. The International Workshop on Current Topics in Monte Carlo Treatment Planning took place at Montreal General Hospital, which is part of McGill University, halfway up Mount Royal on Montreal Island. It was held from 3-5 May, 2004, right after the freezing winter had lost its grip on Canada. About 120 workers attended the Workshop, representing 18 countries. Most of the pioneers in the field were present, as well as a large group of young scientists. In a very full programme, 41 long papers were presented (of which 12 were invited) and 20 posters were on display during the whole meeting. The topics covered included the latest developments in MC algorithms, statistical issues, source modelling, and MC treatment planning for photon, electron and proton treatments. The final day was entirely devoted to clinical implementation issues. Monte Carlo radiotherapy treatment planning has only now made a slow entrée into the clinical environment, taking considerably longer than envisaged ten years ago. Of the twenty-five papers in this dedicated special issue, about a quarter deal with this topic, with probably many more studies to follow in the near future. If anything, we hope the Workshop served as an accelerator for more clinical evaluation of MC applications. The remainder of the papers in this issue demonstrate that there is still plenty of work to be undertaken on other topics such as source modelling, calculation speed, data analysis, and development of user-friendly applications. We acknowledge the financial support of the National Cancer Institute of Canada, the Institute of Cancer Research of the Canadian Institutes of Health Research, the Research Grants Office and the Post Graduate Student Society of McGill University, and the Institute of Physics Publishing (IOPP). A final word of thanks goes out to all of those who contributed to the successful Workshop: our local medical physics students and staff, the many colleagues who acted as guest associate editors for the reviewing process, the IOPP staff, and the authors who generated new and exciting work.
NASA Astrophysics Data System (ADS)
Baptista, M.; Teles, P.; Cardoso, G.; Vaz, P.
2014-11-01
Over the last decade, there has been a substantial increase in the number of interventional cardiology procedures worldwide, and the corresponding ionizing radiation doses for both the medical staff and patients have become a subject of concern. Interventional procedures in cardiology are normally very complex, resulting in long exposure times. These interventions also require the operator to work near the patient and, consequently, close to the primary X-ray beam. Moreover, due to the radiation scattered from the patient and the equipment, the medical staff is exposed to a non-uniform radiation field that can lead to significant exposure of sensitive body organs and tissues, such as the eye lens, the thyroid and the extremities. In order to better understand the spatial variation of the dose and dose-rate distributions during an interventional cardiology procedure, the dose distribution around a C-arm fluoroscopic system, in operation in a cardiac catheterization laboratory (cath lab) at a Portuguese hospital, was estimated using both Monte Carlo (MC) simulations and dosimetric measurements. To model and simulate the cardiac cath lab, including the fluoroscopic equipment used to execute interventional procedures, the state-of-the-art MC radiation transport code MCNPX 2.7.0 was used. Subsequently, thermoluminescent detector (TLD) measurements were performed in order to validate and support the simulation results obtained for the cath lab model. The preliminary results presented in this study reveal that the cardiac cath lab model was successfully validated, given the good agreement between MC calculations and TLD measurements. The simulated isodose curves for the C-arm fluoroscopic system are also consistent with the dosimetric information provided by the equipment manufacturer (Siemens). The adequacy of the implemented computational model for simulating complex procedures and mapping dose distributions around the operator and the medical staff is discussed in view of the optimization principle (and the associated ALARA objective), one of the pillars of the international system of radiological protection.
SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Tian, Z; Pompos, A
2016-06-15
Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculating absorbed dose and the fundamental physical quantities related to biological effects in carbon ion therapy, but its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supports transport simulation in voxelized geometry with kinetic energy up to 450 MeV/u. A class II condensed history algorithm was employed for charged particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross section and yield data from Geant4, and these secondaries were transported via the same condensed history scheme. goCMC supports scoring various quantities of interest, e.g. physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20-100 seconds to simulate 10^7 carbons on an AMD Radeon GPU card; the corresponding CPU time for Geant4 with the same setup was 60-100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
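The stopping power in goCMC is stated to come from the Bethe-Bloch equation. A minimal sketch of the uncorrected Bethe-Bloch mass stopping power for a carbon ion in water is given below; the constants and the omission of shell and density corrections are our simplifications, not goCMC's actual implementation.

```python
import numpy as np

# Physical constants (MeV-based units).
K = 0.307075          # MeV cm^2 / mol, i.e. 4*pi*N_A*r_e^2*m_e*c^2
ME_C2 = 0.511         # electron rest energy, MeV
AMU_C2 = 931.494      # atomic mass unit, MeV

def bethe_bloch_water(E_per_u, z=6, A_ion=12.0, Z_A=0.5551, I_eV=75.0):
    """Uncorrected Bethe-Bloch mass stopping power (MeV cm^2/g) for an ion
    of charge z and mass A_ion*amu with kinetic energy E_per_u (MeV/u)
    in water (Z/A and mean excitation energy I taken for water)."""
    E = E_per_u * A_ion                      # total kinetic energy, MeV
    M = A_ion * AMU_C2                       # ion rest energy, MeV
    gamma = 1.0 + E / M
    beta2 = 1.0 - 1.0 / gamma**2
    I = I_eV * 1e-6                          # mean excitation energy, MeV
    # Maximum energy transfer to a free electron in a single collision.
    ratio = ME_C2 / M
    tmax = (2 * ME_C2 * beta2 * gamma**2 /
            (1 + 2 * gamma * ratio + ratio**2))
    return (K * z**2 * Z_A / beta2 *
            (0.5 * np.log(2 * ME_C2 * beta2 * gamma**2 * tmax / I**2) - beta2))

for e in (100.0, 250.0, 400.0):             # MeV/u, as in the benchmark beams
    print(f"{e:6.1f} MeV/u carbon in water: "
          f"{bethe_bloch_water(e):8.2f} MeV cm^2/g")
```

In a condensed history scheme of the kind the abstract describes, a value like this is integrated over each transport step to obtain the continuous energy loss, with straggling and multiple scattering sampled on top of it.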
STS-107 Pilot William McCool in the cockpit of Columbia during TCDT
NASA Technical Reports Server (NTRS)
2002-01-01
KENNEDY SPACE CENTER, FLA. - STS-107 Pilot William 'Willie' McCool checks instructions in the cockpit of Space Shuttle Columbia during a simulated launch countdown, part of Terminal Countdown Demonstration Test activities. STS-107 is a mission devoted to research and will include more than 80 experiments that will study Earth and space science, advanced technology development, and astronaut health and safety. Launch is planned for Jan. 16, 2003, between 10 a.m. and 2 p.m. EST aboard Space Shuttle Columbia.