Sample records for MC simulation results

  1. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be evaluated and validated through Monte Carlo (MC) simulation. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids computed by the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined with this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of either mammographic or general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating grid designs.
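
    The grid metrics named above (Tp, Ts, Tt, SPR) follow directly from photon tallies at the image receptor with and without the grid in place. A minimal sketch, assuming hypothetical tally counts rather than anything from the paper:

    ```python
    # Minimal sketch (not the authors' code): grid figures of merit from
    # hypothetical MC tallies of primary/scatter photons at the receptor.

    def grid_metrics(p_grid, s_grid, p_open, s_open):
        """Transmissions and SPR from photon counts with ('grid') and
        without ('open') the anti-scatter grid in the beam."""
        t_p = p_grid / p_open                         # primary transmission Tp
        t_s = s_grid / s_open                         # scatter transmission Ts
        t_t = (p_grid + s_grid) / (p_open + s_open)   # total transmission Tt
        spr = s_grid / p_grid                         # scatter-to-primary ratio
        return t_p, t_s, t_t, spr

    # Example: 90% of primaries and 15% of scatter pass the grid
    print(grid_metrics(9.0e5, 4.5e5, 1.0e6, 3.0e6))
    ```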

  2. SU-F-19A-05: Experimental and Monte Carlo Characterization of the 1 cm CivaString 103Pd Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J; Micka, J; Culberson, W

    Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString 103Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of 103Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in 103Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. 103Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.
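
    For readers unfamiliar with the TG-43 quantities named above, the sketch below implements the generic AAPM TG-43 line-source dose-rate equation, D(r,θ) = S_K·Λ·[G_L(r,θ)/G_L(1 cm, 90°)]·g_L(r)·F(r,θ). The dose-rate constant and the g_L table are placeholders, not the CivaString data reported in this record:

    ```python
    # Hedged sketch of the generic AAPM TG-43 line-source formalism named
    # above; Lambda and the g_L table below are placeholders, NOT the
    # CivaString data from this record.
    import numpy as np

    def G_L(r, theta, L):
        """Line-source geometry function: G_L = beta / (L * r * sin(theta))."""
        x, z = r * np.sin(theta), r * np.cos(theta)
        beta = np.arctan2(z + L / 2, x) - np.arctan2(z - L / 2, x)
        return beta / (L * r * np.sin(theta))

    def dose_rate(S_K, Lambda, r, theta, L, g_table, F):
        """D(r,theta) = S_K * Lambda * G_L(r,theta)/G_L(1,pi/2) * g_L(r) * F."""
        g = np.interp(r, g_table[0], g_table[1])   # radial dose function g_L(r)
        return S_K * Lambda * G_L(r, theta, L) / G_L(1.0, np.pi / 2, L) * g * F

    # Illustrative values only: 1 cm source, S_K = 1 U, Lambda ~ 0.7 cGy/(h U)
    g_table = ([0.5, 1.0, 2.0, 5.0], [1.2, 1.0, 0.65, 0.15])  # hypothetical g_L
    print(dose_rate(1.0, 0.7, r=2.0, theta=np.pi / 2, L=1.0,
                    g_table=g_table, F=1.0))
    ```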

  3. Integration of OpenMC methods into MAMMOTH and Serpent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie; DeHart, Mark; Tumulak, Aaron

    OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
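
    As a hedged illustration of the FET idea mentioned above (not OpenMC's implementation), a tally can score Legendre moments of sampled particle positions and reconstruct a smooth distribution from the coefficients:

    ```python
    # Hedged sketch of the FET idea (not OpenMC's implementation): score
    # Legendre moments of sampled positions on [-1, 1], then reconstruct
    # a smooth distribution to pass between codes instead of binned tallies.
    import numpy as np
    from numpy.polynomial import legendre

    def fet_coefficients(x, weights, order):
        """a_n = (2n+1)/2 * <P_n(x)> for n = 0..order (weighted average)."""
        coeffs = []
        for n in range(order + 1):
            basis = np.zeros(n + 1)
            basis[n] = 1.0                       # selects P_n
            p_n = legendre.legval(x, basis)
            coeffs.append((2 * n + 1) / 2 * np.average(p_n, weights=weights))
        return np.array(coeffs)

    # Toy tally: uniform positions weighted by a cos^2 'flux' shape
    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, 100_000)
    w = np.cos(np.pi * x / 2) ** 2
    a = fet_coefficients(x, w, order=6)
    grid = np.linspace(-1.0, 1.0, 5)
    print(legendre.legval(grid, a))              # ~ cos^2(pi x / 2) on the grid
    ```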

  4. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations

    PubMed Central

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    Aim: The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC) derived data, for an 18 MV Varian 2100 Clinac accelerator. Background: High-energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against contaminant neutrons is recommended by the new IAEA and NCRP protocols. Materials and methods: The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and MC-derived data were also obtained. Two bunkers, one designed from the protocols and one from the MC data, were compared and discussed. Results: For the door, the MC simulation and the Wu–McGinley analytical method gave similar BPE and lead thicknesses. For the primary barrier, MC simulation gave 440.11 mm for the ordinary concrete, with a total concrete thickness of 1709 mm required; calculating the same parameters with the recommended analytical methods gave 1762 mm, using the recommended TVL of 445 mm for concrete. Additionally, for the secondary barrier a thickness of 752.05 mm was obtained. Conclusion: Our results showed that MC simulation and the protocol recommendations are in good agreement for the contamination dose calculations. Differences between the analytical and MC methods revealed that relying on only one method for bunker design may lead to underestimation or overestimation in dose and shielding calculations. PMID:26900357
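
    For context, a sketch of the NCRP 151-style analytical barrier calculation that such protocol-based designs rely on; the parameter values below are illustrative, not the site parameters from this study:

    ```python
    # Minimal sketch of an NCRP 151-style barrier calculation; numbers are
    # illustrative, not the Varian 2100 Clinac site parameters.
    import math

    def barrier_thickness(P, d, W, U, T, tvl1, tvl_e):
        """Concrete thickness from shielding goal P (Sv/wk), distance d (m),
        workload W (Gy/wk), use factor U, occupancy T, and TVLs (mm)."""
        B = P * d**2 / (W * U * T)          # required barrier transmission
        n = -math.log10(B)                  # number of tenth-value layers
        return tvl1 + (n - 1) * tvl_e       # first TVL, then equilibrium TVLs

    # Illustrative: 0.1 mSv/wk goal at 5 m, 500 Gy/wk workload, U=0.25, T=1
    print(barrier_thickness(1e-4, 5.0, 500.0, 0.25, 1.0, tvl1=445, tvl_e=430))
    ```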

  5. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations.

    PubMed

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC) derived data, for an 18 MV Varian 2100 Clinac accelerator. High-energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against contaminant neutrons is recommended by the new IAEA and NCRP protocols. The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and MC-derived data were also obtained. Two bunkers, one designed from the protocols and one from the MC data, were compared and discussed. For the door, the MC simulation and the Wu-McGinley analytical method gave similar BPE and lead thicknesses. For the primary barrier, MC simulation gave 440.11 mm for the ordinary concrete, with a total concrete thickness of 1709 mm required; calculating the same parameters with the recommended analytical methods gave 1762 mm, using the recommended TVL of 445 mm for concrete. Additionally, for the secondary barrier a thickness of 752.05 mm was obtained. Our results showed that MC simulation and the protocol recommendations are in good agreement for the contamination dose calculations. Differences between the analytical and MC methods revealed that relying on only one method for bunker design may lead to underestimation or overestimation in dose and shielding calculations.

  6. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
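
    A minimal sketch of such a batch-wise stopping rule (generic; the paper's exact combination of criteria is not reproduced here):

    ```python
    # Sketch of a batch-wise stopping rule in the spirit of the paper:
    # keep adding MC runs until the confidence interval on the output
    # mean stops moving. The criterion is generic, not the authors' exact
    # combination of criteria.
    import numpy as np

    def run_until_converged(model, batch=50, rel_tol=0.01, max_runs=10_000, seed=0):
        """Add `batch` simulations at a time; stop when the 95% CI
        half-width of the output mean is below rel_tol * |mean|."""
        rng = np.random.default_rng(seed)
        outputs = []
        while len(outputs) < max_runs:
            outputs.extend(model(rng) for _ in range(batch))
            y = np.asarray(outputs)
            half_width = 1.96 * y.std(ddof=1) / np.sqrt(len(y))
            if half_width < rel_tol * abs(y.mean()):
                break
        return y

    # Toy 'treatment model' output: effluent concentration, lognormal spread
    toy = lambda rng: 10.0 * rng.lognormal(mean=0.0, sigma=0.3)
    y = run_until_converged(toy)
    print(len(y), y.mean())
    ```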

  8. Development of water movement model as a module of moisture content simulation in static pile composting.

    PubMed

    Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko

    2012-01-01

    This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as a function of relative humidity and temperature, whereas diffusion of liquid water was empirically obtained from experiment by adopting Fick's law. Percolation was estimated by following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with a calculation time step of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only the top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulation of composting.
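
    A rough numerical sketch of the layered moisture balance described above, with invented coefficients standing in for the paper's calibrated evaporation, diffusion, and percolation terms:

    ```python
    # Rough numerical sketch of the layered moisture balance described
    # above. The coefficients are invented for illustration; the paper's
    # calibrated evaporation, diffusion and percolation terms differ.
    import numpy as np

    def step(mc, dz, dt, D=1e-7, k=5e-8, evap=1e-6):
        """One explicit step on the moisture profile mc(z), top = index 0."""
        # downward flux between layers: Fickian diffusion + Darcy-like
        # drainage that only acts above a 60% moisture threshold
        flux_down = D * (mc[:-1] - mc[1:]) / dz + k * np.clip(mc[:-1] - 0.6, 0, None)
        new = mc.copy()
        new[:-1] -= dt / dz * flux_down
        new[1:] += dt / dz * flux_down
        new[0] -= dt * evap              # evaporative loss at the surface
        return np.clip(new, 0.0, 1.0)

    profile = np.full(20, 0.60)          # 60% initial MC, 20 x 5 cm layers
    for _ in range(10_000):              # ~1 week at dt = 60 s
        profile = step(profile, dz=0.05, dt=60.0)
    print(profile[0], profile[-1])       # drier top layer, wetter bottom
    ```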

  9. Study on method to simulate light propagation on tissue with characteristics of radial-beam LED based on Monte-Carlo method.

    PubMed

    Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G

    2013-01-01

    In biomedical applications, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. Therefore, we considered the characteristics of a radial-beam LED and applied them in MC simulation as the light source. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon-launching area. The third is the refraction effect according to the contact area between the LED and a turbid medium. For verification of the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.
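
    A hedged sketch of how the three listed source characteristics can enter a photon-launch routine; the refractive indices, LED radius, and Lambertian angular profile are assumed values, not the paper's:

    ```python
    # Sketch of the three source characteristics listed above entering a
    # photon-launch routine. Refractive indices, LED radius, and the
    # Lambertian angular profile are assumed values, not the paper's.
    import numpy as np

    rng = np.random.default_rng(0)
    n_led, n_tissue = 1.5, 1.37      # encapsulant / tissue indices (assumed)

    def launch_photon(r_led=2.5e-3):
        # 1) launch position: uniform over the circular emitting area (m)
        r = r_led * np.sqrt(rng.uniform())
        phi = rng.uniform(0.0, 2.0 * np.pi)
        # 2) incident angle: cosine-weighted (Lambertian-like) radial beam
        theta = np.arcsin(np.sqrt(rng.uniform()))
        # 3) refraction into the medium at the contact area (Snell's law)
        sin_t = np.sin(theta) * n_led / n_tissue
        if sin_t >= 1.0:
            return None              # total internal reflection: discard
        theta_t = np.arcsin(sin_t)
        pos = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
        direction = np.array([np.sin(theta_t) * np.cos(phi),
                              np.sin(theta_t) * np.sin(phi),
                              np.cos(theta_t)])
        return pos, direction

    print(launch_photon())
    ```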

  10. A virtual source model for Monte Carlo simulation of helical tomotherapy.

    PubMed

    Yuan, Jiankui; Rong, Yi; Chen, Quan

    2015-01-08

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of <1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of <2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media.

  11. A virtual source model for Monte Carlo simulation of helical tomotherapy

    PubMed Central

    Yuan, Jiankui; Rong, Yi

    2015-01-01

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of <1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of <2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media. PACS numbers: 87.53.-j, 87.55.K- PMID:25679157

  12. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abouelnasr, MKF; Smit, B

    2012-01-01

    The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.
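
    A minimal kinetic Monte Carlo sketch of the cage-to-cage hopping picture described above, on a simplified 1D ring of cages, with an invented rate table and a geometric-mean stand-in for the unequal-loading mixing rule:

    ```python
    # Toy kMC of cage-to-cage hopping on a 1D ring (the real zeolites are
    # 3D networks). Rate table and mixing rule are invented illustrations,
    # not the paper's fitted re-crossing coefficients.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cages = 50
    loading = rng.integers(0, 8, n_cages)        # molecules per cage (toy start)

    def k_equal(n):
        """Invented equal-loading hop rate k(n, n) in 1/s."""
        return 1e9 * np.exp(-0.4 * n)

    def hop_rate(n_from, n_to):
        """Toy mixing rule for unequal loadings: geometric mean of k(n,n)."""
        return np.sqrt(k_equal(n_from) * k_equal(n_to))

    t = 0.0
    for _ in range(100_000):
        occupied = loading > 0
        right = hop_rate(loading, np.roll(loading, -1)) * occupied
        left = hop_rate(loading, np.roll(loading, 1)) * occupied
        rates = np.concatenate([right, left])
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # residence-time (BKL) step
        i = rng.choice(rates.size, p=rates / total)
        src = i % n_cages
        dst = (src + 1) % n_cages if i < n_cages else (src - 1) % n_cages
        loading[src] -= 1
        loading[dst] += 1
    print(f"simulated time: {t:.2e} s", loading[:10])
    ```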

  13. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases.

    PubMed

    Abouelnasr, Mahmoud K F; Smit, Berend

    2012-09-07

    The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.

  14. TH-E-BRE-09: TrueBeam Monte Carlo Absolute Dose Calculations Using Monitor Chamber Backscatter Simulations and Linac-Logged Target Current

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A, Popescu I; Lobo, J; Sawkey, D

    2014-06-15

    Purpose: To simulate and measure radiation backscattered into the monitor chamber of a TrueBeam linac; establish a rigorous framework for absolute dose calculations for TrueBeam Monte Carlo (MC) simulations through a novel approach, taking into account the backscattered radiation and the actual machine output during beam delivery; improve agreement between measured and simulated relative output factors. Methods: The ‘monitor backscatter factor’ is an essential ingredient of a well-established MC absolute dose formalism (the MC equivalent of the TG-51 protocol). This quantity was determined for the 6 MV, 6X FFF, and 10X FFF beams by two independent methods: (1) MC simulations in the monitor chamber of the TrueBeam linac; (2) linac-generated beam record data for target current, logged for each beam delivery. Upper head MC simulations used a freely available manufacturer-provided interface to a cloud-based platform, allowing use of the same head model as that used to generate the publicly available TrueBeam phase spaces, without revealing the upper head design. The MC absolute dose formalism was expanded to allow direct use of target current data. Results: The relation between backscatter, number of electrons incident on the target for one monitor unit, and MC absolute dose was analyzed for open fields, as well as for a jaw-tracking VMAT plan. The agreement between the two methods was better than 0.15%. It was demonstrated that the agreement between measured and simulated relative output factors improves across all field sizes when backscatter is taken into account. Conclusion: For the first time, simulated monitor chamber dose and measured target current for an actual TrueBeam linac were incorporated in the MC absolute dose formalism. In conjunction with the use of MC inputs generated from post-delivery trajectory-log files, the present method allows accurate MC dose calculations, without resorting to any of the simplifying assumptions previously made in the TrueBeam MC literature. This work has been partially funded by Varian Medical Systems.

  15. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Gao, M

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.

  16. Study on efficiency of time computation in x-ray imaging simulation based on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly accelerated compared to the CPU: the simulations on the 2304-core GPU ran about 64–114 times faster than on the CPU, while those on the 384-core GPU ran about 20–31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with at least 10⁸ histories and energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.

  17. Development of a Prototype Automation Simulation Scenario Generator for Air Traffic Management Software Simulations

    NASA Technical Reports Server (NTRS)

    Khambatta, Cyrus F.

    2007-01-01

    A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.

  18. Performance prediction for silicon photonics integrated circuits with layout-dependent correlated manufacturing variability.

    PubMed

    Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas

    2017-05-01

    This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonics integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
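
    A sketch of the central modeling step, sampling a spatially correlated variation field on a grid and reading it at component positions; the correlation length and sigma are illustrative, not the paper's wafer-scale fit:

    ```python
    # Sketch of the layout-dependent variation model: a spatially
    # correlated Gaussian field of geometry deviations is sampled on a
    # grid and looked up at each component's position. Correlation length
    # and sigma are illustrative, not the paper's wafer-scale fit.
    import numpy as np

    rng = np.random.default_rng(2)

    def correlated_field(n=32, pitch=10.0, corr_len=100.0, sigma=5.0):
        """n x n grid (pitch in um) of, e.g., waveguide-width deviations in
        nm, with Gaussian spatial covariance, sampled via Cholesky."""
        xy = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1)
        xy = xy.reshape(-1, 2) * pitch
        d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
        cov = sigma**2 * np.exp(-d2 / (2 * corr_len**2))
        L = np.linalg.cholesky(cov + 1e-6 * sigma**2 * np.eye(len(cov)))
        return (L @ rng.standard_normal(len(cov))).reshape(n, n)

    field = correlated_field()
    # two resonators 30 um apart see nearly the same width deviation
    print(field[10, 10], field[10, 13])
    ```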

  19. Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur

    2012-01-01

    One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on the CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall and Q1 and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems over their life cycle. The large-scale forcing derived from MC3E will be used to drive GCE model simulations. The model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, a regional-scale model with very high resolution, the NASA Unified WRF, was also used for real-time forecasting during the MC3E campaign to ensure that the precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of initial and lateral boundary conditions to cloud and precipitation processes and rainfall. We will compare model results in terms of precipitation and surface rainfall using the GCE model and NU-WRF.

  20. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    …iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC iterations… Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC results… Contents include: a priori estimate of the number of MC iterations; MC result accuracy; using percentage error of the mean to estimate the number of MC iterations.
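
    The report's method abbreviations (WM, WSM) are not expanded in this snippet. As a generic illustration only, the textbook a priori estimate sizes the number of MC iterations from a pilot sample as n = (z·s/E)²:

    ```python
    # Generic a priori estimate of MC iteration count from a pilot run;
    # this is the standard textbook formula, not necessarily the report's
    # WM/WSM methods, which the snippet above does not define.
    import numpy as np

    def n_required(pilot, E, z=1.96):
        """Iterations needed so the CI half-width on the mean is about E."""
        s = np.std(pilot, ddof=1)
        return int(np.ceil((z * s / E) ** 2))

    pilot = np.random.default_rng(3).normal(100.0, 15.0, 200)  # pilot sample
    print(n_required(pilot, E=0.5))                            # ~ (1.96*15/0.5)^2
    ```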

  21. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework

    PubMed Central

    Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.

    2015-01-01

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765

  22. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework.

    PubMed

    Dunkerley, David A P; Tomkowiak, Michael T; Slagowski, Jordan M; McCabe, Bradley P; Funk, Tobias; Speidel, Michael A

    2015-02-21

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8-6.4% (18.6-31.5 cm acrylic, 100 kV), versus 2.1-4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems.

  23. Simulating x-ray telescopes with McXtrace: a case study of ATHENA's optics

    NASA Astrophysics Data System (ADS)

    Ferreira, Desiree D. M.; Knudsen, Erik B.; Westergaard, Niels J.; Christensen, Finn E.; Massahi, Sonny; Shortt, Brian; Spiga, Daniele; Solstad, Mathias; Lefmann, Kim

    2016-07-01

    We use the X-ray ray-tracing package McXtrace to simulate the performance of X-ray telescopes based on Silicon Pore Optics (SPO) technologies. We use as reference the design of the optics of the planned X-ray mission Advanced Telescope for High ENergy Astrophysics (ATHENA), which is designed as a single X-ray telescope populated with stacked SPO substrates forming mirror modules to focus X-ray photons. We show that it is possible to simulate the SPO pores in detail and qualify the use of McXtrace for in-depth analysis of in-orbit performance and laboratory X-ray test results.

  24. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.

  25. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Take cone beam CT (CBCT) projection simulation as an example: a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, where all the computations were spent on photons contributing to the detector signal. Conclusion: We propose a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
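
    A toy illustration of the path-by-path idea (not the gMMC physics): whole paths are proposed and kept or rejected with a Metropolis-Hastings ratio so that the retained paths follow a target path probability:

    ```python
    # Toy illustration of path-by-path sampling (not the gMMC physics):
    # whole paths are proposed and accepted/rejected with a Metropolis-
    # Hastings ratio so retained paths follow the target path probability.
    import numpy as np

    rng = np.random.default_rng(4)

    def target_logp(path):
        """Unnormalized log-probability of a path; a stand-in for the
        transport-physics weight of a source-to-detector photon path."""
        steps = np.diff(path)
        return -0.5 * np.sum(steps**2) - 0.1 * np.sum(np.abs(path))

    path = np.zeros(10)                    # current path: 10 interaction points
    scores = []
    for _ in range(20_000):
        proposal = path + 0.3 * rng.standard_normal(path.size)
        log_alpha = target_logp(proposal) - target_logp(path)
        if np.log(rng.uniform()) < log_alpha:     # MH acceptance test
            path = proposal
        scores.append(path[-1])            # score, e.g., the detector endpoint
    print(np.mean(scores), np.std(scores))
    ```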

  26. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms

    NASA Astrophysics Data System (ADS)

    Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-08-01

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  27. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms.

    PubMed

    Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-07-17

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  28. Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    PubMed Central

    Orio, Patricio; Soudry, Daniel

    2012-01-01

    Background: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320
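
    A hedged sketch of the diffusion-approximation idea for the simplest case, a two-state channel, integrated with Euler-Maruyama; the rates are assumed values, and the paper's coupled multi-state schemes are not reproduced:

    ```python
    # Hedged sketch of the diffusion approximation for the simplest case,
    # a two-state channel: the open fraction x follows an SDE whose noise
    # scales as 1/sqrt(N), integrated with Euler-Maruyama. Rates are
    # assumed values, not the paper's coupled multi-state schemes.
    import numpy as np

    rng = np.random.default_rng(5)
    N = 1000                        # number of channels
    alpha, beta = 0.5, 0.2          # opening / closing rates (1/ms), assumed
    x, dt, T = 0.3, 0.01, 200.0     # open fraction, step (ms), duration (ms)

    trace = []
    for _ in range(int(T / dt)):
        drift = alpha * (1 - x) - beta * x
        noise = np.sqrt((alpha * (1 - x) + beta * x) / N)
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)   # keep the fraction in [0, 1]
        trace.append(x)
    print(np.mean(trace), alpha / (alpha + beta))   # ~ steady-state fraction
    ```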

  29. Validation of Shielding Analysis Capability of SuperMC with SINBAD

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing

    2017-09-01

    The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and the NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from the SuperMC simulation were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.

  30. Comparison of film measurements and Monte Carlo simulations of dose delivered with very high-energy electron beams in a polystyrene phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazalova-Carter, Magdalena; Liu, Michael; Palma, Bianey

    2015-04-15

    Purpose: To measure radiation dose in a water-equivalent medium from very high-energy electron (VHEE) beams and make comparisons to Monte Carlo (MC) simulation results. Methods: Dose in a polystyrene phantom delivered by an experimental VHEE beam line was measured with Gafchromic films for three 50 MeV and two 70 MeV Gaussian beams of 4.0–6.9 mm FWHM and compared to corresponding MC-simulated dose distributions. MC dose in the polystyrene phantom was calculated with the EGSnrc/BEAMnrc and DOSXYZnrc codes based on the experimental setup. Additionally, the effect of 2% beam energy measurement uncertainty and possible non-zero beam angular spread on MC dose distributions was evaluated. Results: MC simulated percentage depth dose (PDD) curves agreed with measurements within 4% for all beam sizes at both 50 and 70 MeV VHEE beams. Central axis PDD at 8 cm depth ranged from 14% to 19% for the 5.4–6.9 mm 50 MeV beams and it ranged from 14% to 18% for the 4.0–4.5 mm 70 MeV beams. MC simulated relative beam profiles of regularly shaped Gaussian beams evaluated at depths of 0.64 to 7.46 cm agreed with measurements to within 5%. A 2% beam energy uncertainty and 0.286° beam angular spread corresponded to a maximum 3.0% and 3.8% difference in depth dose curves of the 50 and 70 MeV electron beams, respectively. Absolute dose differences between MC simulations and film measurements of regularly shaped Gaussian beams were between 10% and 42%. Conclusions: The authors demonstrate that relative dose distributions for VHEE beams of 50–70 MeV can be measured with Gafchromic films and modeled with Monte Carlo simulations to an accuracy of 5%. The reported absolute dose differences likely caused by imperfect beam steering and subsequent charge loss revealed the importance of accurate VHEE beam control and diagnostics.

  31. Monte Carlo decision curve analysis using aggregate data.

    PubMed

    Hozo, Iztok; Tsalatsanis, Athanasios; Djulbegovic, Benjamin

    2017-02-01

    Decision curve analysis (DCA) is an increasingly used method for evaluating diagnostic tests and predictive models, but its application requires individual patient data. The Monte Carlo (MC) method can be used to simulate probabilities and outcomes of individual patients and offers an attractive option for application of DCA. We constructed an MC decision model to simulate individual probabilities of outcomes of interest. These probabilities were contrasted against the threshold probability at which a decision-maker is indifferent between key management strategies: treat all, treat none or use a predictive model to guide treatment. We compared the results of DCA with MC simulated data against the results of DCA based on actual individual patient data for three decision models published in the literature: (i) statins for primary prevention of cardiovascular disease, (ii) hospice referral for terminally ill patients and (iii) prostate cancer surgery. The results of MC DCA and patient data DCA were identical. To the extent that patient data DCA were used to inform decisions about statin use, referral to hospice or prostate surgery, the results indicate that MC DCA could have also been used. As long as the aggregate parameters on the distribution of the probability of outcomes and treatment effects are accurately described in the published reports, MC DCA will generate indistinguishable results from individual patient data DCA. We provide a simple, easy-to-use model, which can facilitate wider use of DCA and better evaluation of diagnostic tests and predictive models that rely only on aggregate data reported in the literature. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.
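
    A compact sketch of MC-based decision curve analysis under an assumed outcome-probability distribution (a Beta stand-in for published aggregate data); net benefit at threshold pt is TP/n − FP/n·pt/(1−pt):

    ```python
    # Sketch of MC-based DCA: outcome probabilities are simulated from an
    # ASSUMED Beta distribution standing in for published aggregate data;
    # net benefit at threshold pt is TP/n - FP/n * pt/(1-pt).
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000
    p = rng.beta(2, 8, n)                   # model-predicted event probabilities
    event = rng.uniform(size=n) < p         # MC-simulated patient outcomes

    for pt in (0.05, 0.10, 0.20):
        treat = p >= pt                     # "use model to guide treatment"
        tp = np.mean(event & treat)
        fp = np.mean(~event & treat)
        nb_model = tp - fp * pt / (1 - pt)
        nb_all = np.mean(event) - np.mean(~event) * pt / (1 - pt)
        print(f"pt={pt:.2f}  model={nb_model:.4f}  all={nb_all:.4f}  none=0")
    ```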

  32. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.

  13. OpenMC In Situ Source Convergence Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, after which convergence is assumed to have been achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect convergence and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is started neither too early, by a user setting overly optimistic parameters, nor too late, by setting overly conservative ones.
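
    A minimal sketch of the entropy bookkeeping behind such a detector is given below; a simple moving-window plateau test stands in for the paper's stochastic-oscillator criterion, and the mesh, window size, and tolerance are illustrative assumptions rather than OpenMC's actual settings.

    ```python
    # Sketch of Shannon-entropy-based source convergence detection.
    # The plateau test below is a simplification of the stochastic
    # oscillator described in the record above.
    import numpy as np

    def shannon_entropy(source_xyz, edges):
        """Entropy of fission-source sites binned on a spatial mesh."""
        counts, _ = np.histogramdd(source_xyz, bins=edges)
        p = counts.ravel() / counts.sum()
        p = p[p > 0]                       # drop empty bins (0 log 0 = 0)
        return -np.sum(p * np.log2(p))

    def converged(entropy_history, window=20, tol=0.01):
        """Declare convergence once the entropy has stopped drifting:
        the mean over the last window differs little from the mean over
        the window before it (window and tol are assumed values)."""
        if len(entropy_history) < 2 * window:
            return False
        recent = np.mean(entropy_history[-window:])
        earlier = np.mean(entropy_history[-2 * window:-window])
        return abs(recent - earlier) < tol * abs(earlier)
    ```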

  14. SU-F-T-156: Monte Carlo Simulation Using TOPAS for Synchrotron Based Proton Discrete Spot Scanning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moskvin, V; Pirlepesov, F; Tsiamas, P

    Purpose: This study provides an overview of the design and commissioning of the Monte Carlo (MC) model of the spot-scanning proton therapy nozzle and its implementation for patient plan simulation. Methods: The Hitachi PROBEAT V scanning nozzle was simulated based on vendor specifications using the TOPAS extension of the Geant4 code. FLUKA MC simulation was also utilized to provide supporting data for the main simulation. Validation of the MC model was performed using vendor-provided data and measurements collected during acceptance/commissioning of the proton therapy machine. Actual patient plans using CT-based treatment geometry were simulated and compared to the dose distributions produced by the treatment planning system (Varian Eclipse 13.6) and to patient quality assurance measurements. In-house MATLAB scripts are used for converting DICOM data into TOPAS input files. Results: Comparison analysis of integrated depth doses (IDDs), therapeutic ranges (R90), and spot shapes/sizes at different distances from the isocenter indicates good agreement between MC and measurements. R90 agreement is within 0.15 mm across all energy tunes. IDD and spot shape/size differences are within the statistical error of the simulation (less than 1.5%). The MC-simulated data, validated with physical measurements, were used for the commissioning of the treatment planning system. Patient geometry simulations were conducted based on the Eclipse-produced DICOM plans. Conclusion: The treatment nozzle and standard option beam model were implemented in the TOPAS framework to simulate a highly conformal discrete spot-scanning proton beam system.
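
    For readers unfamiliar with the R90 metric compared above, the sketch below extracts it from an integrated depth-dose curve; the arrays are placeholders, and this is not the authors' analysis code.

    ```python
    # Hedged sketch: therapeutic range R90 = depth on the distal falloff
    # where the IDD drops to 90% of its maximum.
    import numpy as np

    def r90(depth_cm, idd):
        """Distal depth at which the IDD falls to 90% of its peak value."""
        depth_cm, idd = np.asarray(depth_cm, float), np.asarray(idd, float)
        i_peak = int(np.argmax(idd))
        target = 0.9 * idd[i_peak]
        d, v = depth_cm[i_peak:], idd[i_peak:]
        j = int(np.argmax(v < target))   # first distal sample below 90%;
                                         # assumes the curve does fall below it
        # linear interpolation between the bracketing samples
        return d[j - 1] + (target - v[j - 1]) * (d[j] - d[j - 1]) / (v[j] - v[j - 1])
    ```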

  15. Validation of total skin electron irradiation (TSEI) technique dosimetry data by Monte Carlo simulation

    PubMed Central

    Borzov, Egor; Daniel, Shahar; Bar‐Deroma, Raquel

    2016-01-01

    Total skin electron irradiation (TSEI) is a complex technique which requires many nonstandard measurements and dosimetric procedures. The purpose of this work was to validate measured dosimetry data by Monte Carlo (MC) simulations using EGSnrc‐based codes (BEAMnrc and DOSXYZnrc). Our MC simulations consisted of two major steps. In the first step, the incident electron beam parameters (energy spectrum, FWHM, mean angular spread) were adjusted to match the measured data (PDD and profile) at SSD = 100 cm for an open field. In the second step, these parameters were used to calculate dose distributions at the treatment distance of 400 cm. MC simulations of dose distributions from single and dual fields at the treatment distance were performed in a water phantom; dose distribution from the full treatment with six dual fields was simulated in a CT‐based anthropomorphic phantom. MC calculations were compared to the available set of measurements used in clinical practice. For one direct field, MC‐calculated PDDs agreed within 3%/1 mm with the measurements, and lateral profiles agreed within 3% with the measured data. For the output factor (OF), the measured and calculated results were within 2% agreement. The optimal angle of 17° was confirmed for the dual field setup. The MC‐calculated multiplication factor (B12‐factor), which relates the skin dose for the whole treatment to the dose from one calibration field, was 2.9 and 2.8 for the setups with and without degrader, respectively. The measured B12‐factor was 2.8 for both setups; the difference between calculated and measured values was within 3.5%. It was found that a degrader provides a more homogeneous dose distribution. The measured X‐ray contamination for the full treatment was 0.4%, compared to 0.5% obtained with the MC calculation. The feasibility of MC simulation of a full TSEI treatment in an anthropomorphic phantom was demonstrated and is reported for the first time in the literature. The results of our MC calculations were found to be in general agreement with the measurements, providing a promising tool for further studies of dose distribution calculations in TSEI. PACS number(s): 87.10.Rt, 87.55.K, 87.55.ne PMID:27455502

  16. Enabling Microscopic Simulators to Perform System Level Tasks: A System-Identification Based, Closure-on-Demand Toolkit for Multiscale Simulation Stability/Bifurcation Analysis, Optimization and Control

    DTIC Science & Technology

    2006-10-01

    The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum... kinetic Monte Carlo, kMC, equilibrium MC, Lattice-Boltzmann, LB, Brownian Dynamics, BD, or general agent-based, AB) simulators. It also, fortuitously... cond-mat/0310460 at arXiv.org. 27. Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W

  17. TU-H-CAMPUS-IeP1-04: Combined Organ Dose for Digital Subtraction Angiography and Computed Tomography Using Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakabe, D; Ohno, T; Araki, F

    Purpose: The purpose of this study was to evaluate the combined organ dose of digital subtraction angiography (DSA) and computed tomography (CT) using Monte Carlo (MC) simulation for abdominal interventions. Methods: The organ doses for DSA and CT were obtained with MC simulation and with actual measurements using fluorescent-glass dosimeters at 7 abdominal locations in an Alderson-Rando phantom. DSA was performed from three directions: posterior-anterior (PA), right anterior oblique (RAO), and left anterior oblique (LAO). The organ dose from the MC simulation was compared with the actual dose measurements. Calculations for the MC simulation were carried out with the GMctdospp (IMPS, Germany) software based on the EGSnrc MC code. Finally, the combined organ dose for DSA and CT was calculated from the MC simulation using the X-ray conditions of a patient with a diagnosis of hepatocellular carcinoma. Results: For DSA from the PA direction, the organ doses for the actual measurements and the MC simulation were 2.2 and 2.4 mGy/100 mAs at the liver, respectively, and 3.0 and 3.1 mGy/100 mAs at the spinal cord, while for CT, the organ doses were 15.2 and 15.1 mGy/100 mAs at the liver, and 14.6 and 13.5 mGy/100 mAs at the spinal cord. The maximum difference in organ dose between the actual measurements and the MC simulation was 11.0% for the spleen at PA, 8.2% for the spinal cord at RAO, and 6.1% for the left kidney at LAO with DSA, and 9.3% for the stomach with CT. The combined organ dose (4 DSAs and 6 CT scans) under actual patient conditions was found to be 197.4 mGy for the liver and 205.1 mGy for the spinal cord. Conclusion: Our method makes it possible to accurately assess the organ dose to patients in abdominal interventions combining DSA and CT.
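
    The combined-dose bookkeeping reduces to scaling per-100-mAs organ-dose coefficients by the delivered mAs and summing over procedures. The sketch below uses the liver coefficients quoted above, but the per-procedure mAs values are hypothetical, so the output will not reproduce the 197.4 mGy patient result.

    ```python
    # Illustrative arithmetic for combining DSA and CT organ doses from
    # per-100-mAs coefficients (coefficients per the abstract); the
    # per-procedure mAs figures below are assumed, not the patient values.
    liver_dsa_pa = 2.4    # mGy per 100 mAs, MC simulation, DSA PA
    liver_ct     = 15.1   # mGy per 100 mAs, MC simulation, CT

    n_dsa, mas_per_dsa = 4, 250.0   # assumed
    n_ct,  mas_per_ct  = 6, 150.0   # assumed

    dose_liver = (n_dsa * mas_per_dsa / 100.0 * liver_dsa_pa
                  + n_ct * mas_per_ct / 100.0 * liver_ct)
    print(f"combined liver dose ~ {dose_liver:.1f} mGy")
    ```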

  18. a Model to Simulate the Radiative Transfer of Fluorescence in a Leaf

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Ni, Q.

    2018-04-01

    Light is reflected, transmitted and absorbed by green leaves. Chlorophyll fluorescence (ChlF) is the signal emitted by chlorophyll molecules in the leaf after the absorption of light. ChlF can be used as a direct probe of the functional status of the photosynthetic machinery because of its close relationship with photosynthesis. The scattering, absorbing, and emitting properties of leaves are spectrally dependent and can be simulated by modeling leaf-level fluorescence. In this paper, we propose a Monte Carlo (MC) model to simulate the radiative transfer of photons in the leaf. Results show that typical leaf fluorescence spectra can be properly simulated, with two peaks centered at around 685 nm in the red and 740 nm in the far-red regions. By analysing the sensitivity of the input parameters, we found that the MC model simulates their influence on the emitted fluorescence well. We also compared results simulated by the MC model with those from the Fluspect model; generally they agree well in the far-red region but deviate in the red region.
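
    A toy version of such a photon-transport model is sketched below: photons random-walk through a 1D leaf slab and are scored as reflected, transmitted, absorbed, or re-emitted as fluorescence. All optical coefficients and the fluorescence yield are invented, and the 1D isotropic scatter is a crude stand-in for the paper's full radiative-transfer treatment.

    ```python
    # Toy Monte Carlo for photon transport in a 1D leaf slab, illustrating
    # the absorb / scatter / fluoresce bookkeeping. All values are assumed.
    import numpy as np

    rng = np.random.default_rng(1)
    mu_a, mu_s = 8.0, 40.0          # absorption / scattering, 1/cm (assumed)
    thickness = 0.02                # leaf thickness, cm (assumed)
    fqe = 0.02                      # fluorescence quantum efficiency (assumed)

    def run_photon():
        z, direction = 0.0, +1.0    # depth and travel direction (down/up)
        while True:
            step = -np.log(rng.random()) / (mu_a + mu_s)   # free path
            z += direction * step
            if z < 0.0:
                return "reflected"
            if z > thickness:
                return "transmitted"
            if rng.random() < mu_a / (mu_a + mu_s):        # absorption event
                return "fluoresced" if rng.random() < fqe else "absorbed"
            direction = rng.choice([-1.0, 1.0])            # isotropic 1D scatter

    fates = [run_photon() for _ in range(20_000)]
    for f in ("reflected", "transmitted", "absorbed", "fluoresced"):
        print(f, fates.count(f) / len(fates))
    ```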

  19. Kinetic Monte Carlo (kMC) simulation of carbon co-implant on pre-amorphization process.

    PubMed

    Park, Soonyeol; Cho, Bumgoo; Yang, Seungsu; Won, Taeyoung

    2010-05-01

    We report our kinetic Monte Carlo (kMC) study of the effect of carbon co-implants on the pre-amorphization implant (PAI) process. We employed the BCA (Binary Collision Approximation) approach for the acquisition of the initial as-implanted dopant profile and the kMC method for the simulation of diffusion during the annealing process. The simulation results imply that the carbon co-implant suppresses boron diffusion through recombination with interstitials. We also compared boron diffusion with carbon diffusion by calculating the carbon-interstitial reactions, and found that boron diffusion depends on the carbon co-implant energy, since carbon enhances the trapping of the interstitials that would otherwise pair with boron.
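
    The event-selection loop at the heart of a diffusion kMC code of this kind can be sketched generically (Gillespie-style, rejection-free); the rate table below is purely illustrative and not taken from the study.

    ```python
    # Generic kinetic Monte Carlo step: pick one event with probability
    # proportional to its rate, advance the clock by an exponential
    # waiting time. Rates here are invented placeholders.
    import numpy as np

    rng = np.random.default_rng(2)

    def kmc_step(rates, t):
        """Select an event and advance simulation time."""
        rates = np.asarray(rates, dtype=float)
        total = rates.sum()
        event = rng.choice(len(rates), p=rates / total)
        dt = -np.log(rng.random()) / total
        return event, t + dt

    # e.g. rates for: boron hop, interstitial hop, C + I trapping (assumed)
    event, t = kmc_step([1.0e3, 5.0e4, 2.0e2], t=0.0)
    ```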

  20. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    PubMed

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time-activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time-activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with noise properties very similar to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
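
    The per-voxel time-activity-curve generation such a simulator performs can be illustrated with a one-tissue compartment model; the sketch below is a generic example with an invented input function and rate constants, not dPETSTEP's actual code.

    ```python
    # Sketch: time-activity curve from a one-tissue compartment model,
    # C_t(t) = K1 * exp(-k2 t) convolved with the plasma input Cp(t),
    # plus Poisson counting noise. All parameters are assumed.
    import numpy as np

    t = np.linspace(0.0, 60.0, 601)             # minutes
    dt = t[1] - t[0]
    cp = 10.0 * t * np.exp(-t / 2.0)            # assumed plasma input function

    def one_tissue_tac(k1, k2):
        """Discrete convolution of the impulse response with Cp(t)."""
        irf = k1 * np.exp(-k2 * t)
        return np.convolve(cp, irf)[: len(t)] * dt

    tac = one_tissue_tac(k1=0.1, k2=0.05)
    noisy = np.random.default_rng(3).poisson(np.maximum(tac, 0) * 50.0) / 50.0
    ```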

  1. FF12MC: A revised AMBER forcefield and new protein simulation protocol

    PubMed Central

    2016-01-01

    ABSTRACT Specialized to simulate proteins in molecular dynamics (MD) simulations with explicit solvation, FF12MC is the combination of a new protein simulation protocol employing atomic masses uniformly reduced by tenfold and a revised AMBER forcefield FF99 with (i) shortened C—H bonds, (ii) removal of torsions involving a nonperipheral sp3 atom, and (iii) reduced 1–4 interaction scaling factors of torsions ϕ and ψ. This article reports that in multiple, distinct, independent, unrestricted, unbiased, isobaric–isothermal, classical MD simulations FF12MC can (i) simulate the experimentally observed flipping between left‐ and right‐handed configurations for C14–C38 of BPTI in solution, (ii) autonomously fold chignolin, CLN025, and Trp‐cage with folding times that agree with the experimental values, (iii) simulate the subsequent unfolding and refolding of these miniproteins, and (iv) achieve a robust Z score of 1.33 for refining protein models TMR01, TMR04, and TMR07. By comparison, the latest general‐purpose AMBER forcefield FF14SB locks the C14–C38 bond to the right‐handed configuration in solution under the same protein simulation conditions. Statistical survival analysis shows that FF12MC folds chignolin and CLN025 in isobaric–isothermal MD simulations 2–4 times faster than FF14SB under the same conditions. These results suggest that FF12MC may be used in protein simulations to study the kinetics and thermodynamics of miniprotein folding, as well as protein structure and dynamics. Proteins 2016; 84:1490–1516. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27348292

  2. Estimation of absorbed doses from paediatric cone-beam CT scans: MOSFET measurements and Monte Carlo simulations.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang

    2010-03-01

    The purpose of this study was to establish a dose estimation tool based on Monte Carlo (MC) simulations. A 5-year-old paediatric anthropomorphic phantom was scanned with computed tomography (CT) to create a voxelised phantom, which was used as input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager® was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. The MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. The model can be easily applied to multi-detector CT dosimetry.

  3. Improved importance sampling technique for efficient simulation of digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of the simulation estimator variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific, previously known conventional importance sampling (CIS) technique and to the new IIS technique. The derivation for a linear system with no signal or random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
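
    The variance-reduction idea can be illustrated on a Gaussian tail probability: plain MC rarely samples the tail, while a translated sampling density (in the spirit of the IIS translation parameter) concentrates samples there and reweights them. The sketch below is a textbook example, not the authors' formulation.

    ```python
    # Plain MC vs translation-based importance sampling for P(X > t),
    # X ~ N(0,1) -- the kind of rare-event estimate behind bit-error-rate
    # simulation. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    t, n = 4.0, 200_000

    # Plain Monte Carlo: almost no samples fall in the tail.
    x = rng.normal(0.0, 1.0, n)
    mc_est = np.mean(x > t)

    # Translation: sample from N(t, 1) and reweight by f(y)/g(y);
    # normalization constants cancel in the density ratio.
    y = rng.normal(t, 1.0, n)
    w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - t) ** 2)
    is_est = np.mean((y > t) * w)

    print(f"MC: {mc_est:.2e}   IS: {is_est:.2e}   exact ~ 3.17e-05")
    ```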

  4. Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation

    NASA Astrophysics Data System (ADS)

    Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong

    2018-02-01

    Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include an optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and the spatial resolution for optical imaging simulation. The lens implemented in GATE was validated against ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could improve the image quality of bioluminescence and fluorescence significantly as compared with pinhole optics.

  5. MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pater, P; Vallieres, M; Seuntjens, J

    2014-06-15

    Purpose: To present a hands-on project on Monte Carlo (MC) methods recently added to the curriculum and to discuss the students' appreciation of it. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that computes a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted, to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of teaching radiation physics through MC, and surveyed possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class and 65% estimate that they have a good to very good understanding of MC methods after attending it. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and open questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate curriculum since 2012. MC methods produce "gold standard" dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from the governments of Canada and Quebec. PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
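
    The sampling steps the students implement can be sketched as follows for a toy kerma-approximation photon history; the cross-section values are placeholders, and the random-fraction energy transfer stands in for proper Klein-Nishina sampling of the Compton interaction.

    ```python
    # Toy photon history under the kerma approximation: sample free path,
    # choose interaction type by cross-section ratio, deposit energy locally.
    # Cross sections are assumed values, not real 50 keV data.
    import numpy as np

    rng = np.random.default_rng(5)
    mu_photo, mu_compton = 0.05, 0.15        # 1/cm (assumed)
    mu_total = mu_photo + mu_compton

    def transport_photon(energy_kev=50.0):
        deposited, pos = 0.0, 0.0
        while energy_kev > 1.0:
            pos += -np.log(rng.random()) / mu_total     # sample free path
            if rng.random() < mu_photo / mu_total:      # photoelectric
                deposited += energy_kev                 # deposit all locally
                break
            # crude Compton stand-in: transfer a random fraction of energy
            transfer = energy_kev * rng.uniform(0.0, 0.5)
            deposited += transfer                       # kerma approximation
            energy_kev -= transfer
        return deposited

    mean_dep = np.mean([transport_photon() for _ in range(10_000)])
    ```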

  6. The effect of glycerin solution density and viscosity on vibration amplitude of oblique different piezoelectric MC near the surface in 3D modeling

    NASA Astrophysics Data System (ADS)

    Korayem, A. H.; Abdi, M.; Korayem, M. H.

    2018-06-01

    Surface topography at the nanoscale is one of the most important applications of AFM. The analysis of piezoelectric microcantilever (MC) vibration behavior is essential to improve AFM performance. To this end, one of the appropriate methods to simulate the dynamic behavior of the MC is a numerical solution with FEM in 3D modeling using COMSOL software. The present study simulates different geometries of four-layered AFM piezoelectric MCs in 2D and 3D modeling in a liquid medium using COMSOL. The 3D simulation was done in a spherical container using the FSI domain in COMSOL. In 2D modeling, the governing motion equation was derived by applying Hamilton's principle based on Euler-Bernoulli beam theory and discretized with FEM. In this mode, the hydrodynamic force was approximated by a string of spheres, and its effect, along with the squeezed-film force, was included in the MC equations. The effect of fluid density and viscosity on the vibrations of MCs immersed in different glycerin solutions was investigated in 2D and 3D modes, and the results were compared with experimental results. The frequencies and time responses of the MC close to the surface were obtained considering tip-sample forces. The surface topographies produced by different MC geometries were compared in the liquid medium, in both tapping and non-contact modes. Various types of surface roughness were considered in the topography for the different MC geometries, and the effect of geometric dimensions on the surface topography was investigated. In a liquid medium, the MC is installed at an oblique position to avoid damage from the squeezed-film force in the vicinity of the surface. Finally, the effect of the MC's angle on the surface topography and the time response of the system was investigated.

  7. Numerical Simulation on a Possible Formation Mechanism of Interplanetary Magnetic Cloud Boundaries

    NASA Astrophysics Data System (ADS)

    Fan, Quan-Lin; Wei, Feng-Si; Feng, Xue-Shang

    2003-08-01

    The formation mechanism of interplanetary magnetic cloud (MC) boundaries is numerically investigated by simulating the interactions between an MC with some initial momentum and a local interplanetary current sheet. The compressible 2.5D MHD equations are solved. Results show that magnetic reconnection is a possible formation mechanism when an MC interacts with a surrounding current sheet. A number of interesting features are found. For instance, the front boundary of the MC is a magnetic reconnection boundary that could be caused by driven reconnection ahead of the cloud, and the tail boundary might be caused by the driving of the entrained flow as a result of the Bernoulli principle. Analysis of the magnetic field and plasma data demonstrates that these two boundaries exhibit large values of the plasma parameter β, a clear increase of plasma temperature and density, a distinct decrease of magnetic field magnitude, and a transition of magnetic field direction of about 180 degrees. The outcome of the present simulation agrees qualitatively with the observational results on MC boundaries inferred from IMP-8, etc. The project was supported by the National Natural Science Foundation of China under Grant Nos. 40104006, 49925412, and 49990450.

  8. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Comparisons are presented here between MC results, experimental electron-probe microanalysis (EPMA) measurements, and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop φ(ρz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both the experimental data and the φ(ρz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions of the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; the derived x-ray intensities have a built-in atomic number correction and are further corrected for absorption and characteristic fluorescence using the PAP φ(ρz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
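
    For reference, the binary alpha-factor relation (in its Ziebold-Ogilvie form) that linearizes the dependence between the concentration C_A and the measured k-ratio K_A against pure-element standards is commonly written as shown below; this is quoted from the general EPMA literature, not from the record above.

    ```latex
    % Binary alpha-factor (Ziebold-Ogilvie) relation for element A in a
    % binary AB, with K_A the k-ratio measured against a pure-A standard:
    \[
      \frac{C_A}{K_A} \;=\; \alpha_{AB} + \left(1-\alpha_{AB}\right) C_A
    \]
    ```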

  9. Universal aspects of conformations and transverse fluctuations of a two-dimensional semi-flexible chain

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Huang, Aiqun; Bhattacharya, Aniket; Binder, Kurt

    2015-03-01

    In this talk we compare the results obtained from Monte Carlo (MC) and Brownian dynamics (BD) simulation for the universal properties of a semi-flexible chain. Specifically, we compare MC results obtained using the pruned-enriched Rosenbluth method (PERM) with those obtained from BD simulation. We find that the scaled plots of the root-mean-square (RMS) end-to-end distance ⟨R²⟩/(2Llp) and the RMS transverse fluctuation √⟨l⊥²⟩/lp as functions of L/lp (where L and lp are the contour length and the persistence length, respectively) are universal and independent of the definition of the persistence length used in the MC and BD schemes. We further investigate to what extent these results agree for a semi-flexible polymer confined in a quasi-one-dimensional channel.

  10. Methods for Monte Carlo simulations of biomacromolecules

    PubMed Central

    Vitalis, Andreas; Pappu, Rohit V.

    2010-01-01

    The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections deal with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, and the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473

  11. Concepts for dose determination in flat-detector CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Yiannis; Deak, Paul; Langner, Oliver; Kalender, Willi A.

    2008-07-01

    Flat-detector computed tomography (FD-CT) scanners provide large irradiation fields of typically 200 mm in the cranio-caudal direction. In consequence, dose assessment according to the current definition of the computed tomography dose index, CTDI with integration length L = 100 mm, would demand larger ionization chambers and phantoms, which do not appear practical. We investigated the usefulness of the CTDI concept and practical dosimetry approaches for FD-CT by measurements and Monte Carlo (MC) simulations. An MC simulation tool (ImpactMC, VAMP GmbH, Erlangen, Germany) was used to assess the dose characteristics and was calibrated with measurements of air kerma. For validation purposes, measurements were performed on an Axiom Artis C-arm system (Siemens Medical Solutions, Forchheim, Germany) equipped with a flat detector of 40 cm × 30 cm. The dose was assessed for 70 kV and 125 kV in cylindrical PMMA phantoms of 160 mm and 320 mm diameter, with phantom lengths varying from 150 to 900 mm. MC simulation results were compared to the values obtained with calibrated ionization chambers of 100 mm and 250 mm length and to thermoluminescence (TLD) dose profiles. The MC simulations were used to calculate the efficiency of the CTDI_L determination with respect to the desired CTDI∞. Both the MC simulation results and the dose distributions obtained by MC simulation were in very good agreement, to within 5%, with the CTDI measurements and with the reference TLD profiles, respectively. Standard CTDI phantoms, which have a z-extent of 150 mm, underestimate the dose at the center by up to 55%, whereas a z-extent of ≥600 mm appears to be sufficient for FD-CT; the baseline value of the respective profile was within 1% of the reference baseline. As expected, the measurements with ionization chambers of 100 mm and 250 mm offer limited accuracy, and an increased integration length of ≥600 mm appeared to be necessary to approximate CTDI∞ to within 1%. MC simulations appear to offer a practical and accurate way of assessing conversion factors for arbitrary dosimetry setups using a standard pencil chamber to provide estimates of CTDI∞. This would eliminate the need for extra-long phantoms and ionization chambers or excessive amounts of TLDs.
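
    The efficiency calculation described above amounts to the ratio of the dose-profile integral over the chamber length to the integral over the full profile. The sketch below illustrates it with an invented Gaussian-plus-tails profile standing in for a measured or MC-simulated one.

    ```python
    # Sketch of CTDI integration-length efficiency:
    # eff(L) = integral of D(z) over [-L/2, L/2] / integral over all z.
    # The profile below is an assumed stand-in, not measured data.
    import numpy as np

    z = np.linspace(-450.0, 450.0, 9001)        # mm
    profile = (np.exp(-0.5 * (z / 80.0) ** 2)
               + 0.05 * np.exp(-np.abs(z) / 150.0))   # core + scatter tails

    def ctdi_efficiency(length_mm):
        inside = np.abs(z) <= length_mm / 2.0
        return np.trapz(profile[inside], z[inside]) / np.trapz(profile, z)

    for L in (100.0, 250.0, 600.0):
        print(f"L = {L:4.0f} mm  ->  CTDI_L / CTDI_inf = {ctdi_efficiency(L):.2f}")
    ```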

  12. Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.

    PubMed

    Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna

    2018-04-28

    We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with a more realistic carrier gas. In this study, not only the primarily used velocity-scaling thermostat is considered, but also the Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference from the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by the classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].

  13. Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation

    NASA Astrophysics Data System (ADS)

    Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna

    2018-04-01

    We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with a more realistic carrier gas. In this study, not only the primarily used velocity-scaling thermostat is considered, but also the Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ɛ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference from the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by the classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].

  14. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    NASA Astrophysics Data System (ADS)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process improved the simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.

  15. Game of Life on the Equal Degree Random Lattice

    NASA Astrophysics Data System (ADS)

    Shao, Zhi-Gang; Chen, Tao

    2010-12-01

    An effective matrix method is used to build the equal degree random (EDR) lattice, and a cellular automaton game of life on the EDR lattice is then studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied, giving a density of live cells ρ = 0.37017, which is consistent with the MC simulation result ρ = 0.37 ± 0.003.

  16. Simulating adsorptive expansion of zeolites: application to biomass-derived solutions in contact with silicalite.

    PubMed

    Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M

    2013-04-16

    We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.

  17. Scaling up watershed model parameters: flow and load simulations of the Edisto River Basin, South Carolina, 2007-09

    USGS Publications Warehouse

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    As part of an ongoing effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin, analyses and simulations of the hydrology of the Edisto River Basin were made using the topography-based hydrological model (TOPMODEL). A primary focus of the investigation was to assess the potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River Basin. Scaling up was done in a step-wise manner, beginning with applying the calibration parameters, meteorological data, and topographic-wetness-index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made for subsequent simulations, culminating in the best simulation, which included meteorological and topographic-wetness-index data from the Edisto River Basin and updated values for some of the TOPMODEL calibration parameters. The scaling-up process resulted in nine simulations. Simulation 7 best matched the streamflows at station 02175000, Edisto River near Givhans, SC, which was the downstream limit for the TOPMODEL setup; it was obtained by adjusting the scaling factor, including streamflow routing, and using NEXRAD precipitation data for the Edisto River Basin. The Nash-Sutcliffe coefficient of model-fit efficiency and Pearson's correlation coefficient for simulation 7 were 0.78 and 0.89, respectively. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the McTier Creek and Edisto River models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the substantial difference in drainage-area size at the outlet locations of the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek Basin were used in the water-quality load models.
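
    For reference, the Nash-Sutcliffe efficiency quoted above (0.78 for simulation 7) is defined as in this minimal implementation:

    ```python
    # Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance to
    # the variance of the observations; 1 is a perfect fit, 0 means the
    # model is no better than the mean of the observations.
    import numpy as np

    def nash_sutcliffe(q_obs, q_sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
        return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
    ```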

  18. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulations of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing; double-precision computing for floating-point arithmetic operations provides higher accuracy.

  19. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulations of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing; double-precision computing for floating-point arithmetic operations provides higher accuracy.

  20. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach to high performance computing and is implemented here to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes, and the parallelization overhead is negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
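
    The distribute-and-aggregate pattern described above can be sketched with a generic MPI reduction; this uses mpi4py with a stand-in tally and is not the authors' EGS5 cloud tooling.

    ```python
    # Generic pattern: split independent MC histories across nodes, run a
    # per-node tally, and reduce to the root. Run with an MPI launcher,
    # e.g.: mpirun -n 4 python this_script.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_total = 1_000_000
    n_local = n_total // size                   # histories per worker

    rng = np.random.default_rng(seed=rank)      # independent stream per node
    local_dose = np.sum(rng.exponential(1.0, n_local))   # stand-in "tally"

    total_dose = comm.reduce(local_dose, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"mean deposit per history: {total_dose / n_total:.4f}")
    ```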

  1. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations

    PubMed Central

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward target structures. In this approach, higher SE is achieved by perturbing conventional MD simulations with a MC structure-acceptance judgment based on the degree of coincidence of small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded structures that were, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775

  2. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations.

    PubMed

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward target structures. In this approach, higher SE is achieved by perturbing conventional MD simulations with a MC structure-acceptance judgment based on the degree of coincidence of small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded structures that were, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation.

  3. Influence of photon energy cuts on PET Monte Carlo simulation results.

    PubMed

    Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan

    2012-07-01

    The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences, as well as on the simulation time and noise-equivalent count rate, is evaluated by comparing the results for higher cuts to those for a 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC-generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of photon cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi K x rays must be simulated in order to obtain unbiased estimates of the number of singles, true, scattered, and random coincidences, as well as an unbiased estimate of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton-scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and the similarity measures used for their comparison is statistically significant for very high cuts (e.g., 350 keV). The simulation of the transport of characteristic x rays plays an important role if accurate modeling of a PET scanner system is to be achieved. The simulation time decreases only slowly with increasing photon cut which, combined with the accuracy loss at high cuts, means that the use of high photon energy cuts is not recommended for accelerating MC simulations.
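
    The mechanism behind the 170 keV threshold is simple: a 511 keV annihilation photon retains at least about 170 keV after a single Compton scatter (the backscatter limit E/(1 + 2E/m_e c²) = 511/3 keV), so any cut above that deposits scattered photons locally instead of tracking them. A minimal illustration of the cut logic, with an invented function name:

    ```python
    # How a photon energy cut terminates transport: below the cut the
    # photon's remaining energy is deposited locally. A cut above the
    # Compton-scattered energies silently converts scatter into locally
    # absorbed events. 'step_photon' is a hypothetical helper.
    def step_photon(energy_kev, cut_kev):
        """Return 'track' to keep transporting, 'deposit' to absorb locally."""
        return "deposit" if energy_kev < cut_kev else "track"

    # A 511 keV photon Compton-scattered down to 300 keV:
    print(step_photon(300.0, cut_kev=1.0))    # track   -> scatter is modelled
    print(step_photon(300.0, cut_kev=350.0))  # deposit -> scatter absorbed
    ```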

  4. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-07

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work with several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.

  5. Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation.

    PubMed

    Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong

    2018-02-01

    Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include an optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and the spatial resolution for optical imaging simulation. The lens implemented in GATE was validated against ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could improve the image quality of bioluminescence and fluorescence significantly as compared with pinhole optics. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  6. The Monte Carlo simulation of the Borexino detector

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2018-01-01

    We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.

  7. Monte Carlo simulations on atropisomerism of thienotriazolodiazepines applicable to slow transition phenomena using potential energy surfaces by ab initio molecular orbital calculations.

    PubMed

    Morikami, Kenji; Itezono, Yoshiko; Nishimoto, Masahiro; Ohta, Masateru

    2014-01-01

    Compounds with a medium-sized flexible ring often show atropisomerism, which is caused by the high energy barriers between long-lived conformers that can be isolated and often have different biological properties from each other. In this study, the frequency of the transition between the two stable conformers, aS and aR, of thienotriazolodiazepine compounds with flexible 7-membered rings was estimated computationally by Monte Carlo (MC) simulations and validated experimentally by NMR experiments. To estimate the energy barriers for transitions as precisely as possible, the potential energy (PE) surfaces used in the MC simulations were calculated by molecular orbital (MO) methods. To accomplish the MC simulations with the MO-based PE surfaces in a practical central processing unit (CPU) time, the MO-based PE of each conformer was pre-calculated and stored before the MC simulations, and then only referred to during the MC simulations. The activation energies for transitions calculated by the MC simulations agreed well with the experimental ΔG determined by the NMR experiments. The analysis of the transition trajectories of the MC simulations revealed that the transition occurred not only through the transition states, but also through many different transition paths. Our computational method gives quantitative estimates of the atropisomerism of thienotriazolodiazepine compounds in a practical period of time, and it could be applicable to other slow-dynamics phenomena that cannot be investigated by other atomistic simulations.
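
    The time-saving idea, precomputing the MO-based potential energy once and reducing each MC step to a table lookup, can be sketched as follows; the one-dimensional toy surface stands in for the real multi-dimensional MO-based PE grid:

        import math, random

        kT = 0.596  # kcal/mol near 300 K
        pes = [10.0 * (1 - math.cos(math.radians(3 * a))) for a in range(360)]  # toy PES

        def mc_step(angle):
            trial = (angle + random.randint(-5, 5)) % 360
            dE = pes[trial] - pes[angle]        # lookup only; no MO call in the loop
            if dE <= 0 or random.random() < math.exp(-dE / kT):
                return trial                    # accepted
            return angle                        # rejected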

  8. The impact of the diurnal cycle on the propagation of Madden-Julian Oscillation convection across the Maritime Continent

    DOE PAGES

    Hagos, Samson M.; Zhang, Chidong; Feng, Zhe; ...

    2016-09-19

    Influences of the diurnal cycle of convection on the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC) are examined using cloud-permitting regional model simulations and observations. A pair of ensembles of control (CONTROL) and no-diurnal cycle (NODC) simulations of the November 2011 MJO episode are performed. In the CONTROL simulations, the MJO signal is weakened as it propagates across the MC, with much of the convection stalling over the large islands of Sumatra and Borneo. In the NODC simulations, where the incoming shortwave radiation at the top of the atmosphere is maintained at its daily mean value, the MJO signal propagating across the MC is enhanced. Examination of the surface energy fluxes in the simulations indicates that in the presence of the diurnal cycle, surface downwelling shortwave radiation in CONTROL simulations is larger because clouds preferentially form in the afternoon. Furthermore, the diurnal co-variability of surface wind speed and skin temperature results in a larger sensible heat flux and a cooler land surface in CONTROL compared to NODC simulations. Here, an analysis of observations indicates that the modulation of the downwelling shortwave radiation at the surface by the diurnal cycle of cloudiness negatively projects on the MJO intraseasonal cycle and therefore disrupts the propagation of the MJO across the MC.

  9. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Bai, T

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an NVIDIA GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
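
    Steps 3-5 of the pipeline can be sketched as follows; mc_scatter() is a hypothetical stand-in for the GPU MC scatter estimate at one view angle:

        import numpy as np

        N_VIEWS, SPARSE = 360, 31
        sparse_angles = np.linspace(0, N_VIEWS - 1, SPARSE)

        def mc_scatter(angle, n_pix=256):
            """Hypothetical placeholder for the GPU MC scatter estimate."""
            return 0.1 * (1 + np.cos(np.radians(angle))) * np.ones(n_pix)

        def scatter_correct(raw):               # raw: (N_VIEWS, n_pix) projections
            s = np.stack([mc_scatter(a, raw.shape[1]) for a in sparse_angles])
            scatter = np.empty_like(raw)
            for j in range(raw.shape[1]):       # interpolate over view angle
                scatter[:, j] = np.interp(np.arange(N_VIEWS), sparse_angles, s[:, j])
            return raw - scatter                # scatter-corrected input for FDK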

  10. SU-G-BRC-10: Feasibility of a Web-Based Monte Carlo Simulation Tool for Dynamic Electron Arc Radiotherapy (DEAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, A; Wu, Q; Sawkey, D

    Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated, used in the MC simulation, and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured dose at a depth of 1.5 cm, and the results were compared to the simulation. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3×10 cm{sup 2} cut-out in a 15×15 cm{sup 2} applicator for a delivery of a 90° arc. The resulting data provide qualitative and quantitative evidence that the simulation platform could be used as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model-based algorithms, allowing efficient optimization of trajectories and quality assurance of plans without the need for extensive measurements.

  11. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to shorten this duration. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT. The validated MC model simulation was repeated by applying electron range rejection, controlled by a global electron cut-off energy of 1, 2, and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was used in the particle-history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷ histories. With a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving its accuracy.
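
    A toy condensed-history loop showing the range-rejection idea, with a global electron cut-off applied as above (the per-step physics here is fabricated for illustration):

        import random

        E_CUT = 5.0  # MeV; the 1, 2 and 5 MeV cut-offs above play this role

        def transport(energy, depth=0.0):
            while energy > 0.0:
                if energy < E_CUT:
                    return depth            # history terminated early (range rejection)
                energy -= random.uniform(0.1, 0.5)   # toy per-step energy loss
                depth += 0.1
            return depth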

  12. Design, Evaluation and GCM-Performance of a New Parameterization for Microphysics of Clouds with Relaxed Arakawa-Schubert Scheme (McRas)

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.

    1998-01-01

    A prognostic cloud scheme named McRAS (Microphysics of clouds with Relaxed Arakawa-Schubert Scheme) was developed with the aim of improving cloud microphysics and cloud-radiation interactions in GCMs. McRAS distinguishes convective, stratiform, and boundary-layer clouds. The convective clouds merge into stratiform clouds on an hourly time-scale, while the boundary-layer clouds do so instantly. The cloud condensate transforms into precipitation following the auto-conversion relations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation (a sketch of this form follows below). All clouds convect, advect, and diffuse both horizontally and vertically with fully active cloud microphysics throughout their life cycle, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single column model (SCM) with the GATE Phase III data has shown that McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. An evaluation with the ARM-CART SCM data in a cloud model intercomparison exercise shows a reasonable but not outstandingly accurate simulation. Such a discrepancy is common to almost all models and is related, in part, to the input data quality. McRAS was implemented in the GEOS II GCM. A 50-month integration, initialized with the ECMWF analysis of observations for January 1, 1987 and forced with the observed sea-surface temperatures, sea-ice distribution, and vegetation properties (biomes and soils), with prognostic soil moisture, snow cover, and hydrology, showed a very realistic simulation of cloud processes, in-cloud water and ice, and cloud-radiative forcing (CRF). The simulated ITCZ showed a realistic time-mean structure and seasonal cycle, while the simulated CRF showed sensitivity to the vertical distribution of cloud water, which can be easily altered by the choice of the time constant and in-cloud critical cloud water amount regulators for auto-conversion. The CRF and its feedbacks also have a profound effect on the ITCZ. Even though somewhat weaker than observed, the McRAS-GCM simulation produces robust 30-60 day oscillations in the 200 hPa velocity potential. Two ensembles of 4-summer (July, August, September) simulations, one each for 1987 and 1988, show that the McRAS-GCM simulates realistic and statistically significant precipitation differences over India, Central America, and tropical Africa. Several seasonal simulations were performed with the McRAS-GEOS II GCM for the summer (June-July-August) and winter (December-January-February) periods to determine how the simulated clouds and CRFs would be affected by: (i) advection of clouds; (ii) cloud-top entrainment instability; (iii) cloud water inhomogeneity correction; and (iv) cloud production and dissipation in different cloud processes. The results show that each of these processes contributes to the simulated cloud fraction and CRF.
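
    A sketch of a Sundqvist-type autoconversion of the general form referred to above; c0 and the critical in-cloud condensate m_crit play the role of the tunable "regulators" the abstract notes, with purely illustrative values:

        import math

        def autoconversion(m, c0=1.0e-4, m_crit=3.0e-4):
            """m: in-cloud condensate (kg/kg); returns a conversion rate (kg/kg/s)."""
            return c0 * m * (1.0 - math.exp(-(m / m_crit) ** 2))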

  13. Competitive Adsorption and Ordered Packing of Counterions near Highly Charged Surfaces: From Mean-Field Theory to Monte Carlo Simulations

    PubMed Central

    Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo

    2013-01-01

    Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson's equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise that the ionic species with a higher valence-to-volume ratio has a larger probability to form the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling. PMID:22680474
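
    The canonical-ensemble acceptance rule used in such Metropolis MC simulations, as a minimal sketch; delta_E would come from the primitive-model electrostatics, with hard-sphere overlaps rejected outright:

        import math, random

        BETA = 1.0  # 1/kT in reduced units

        def metropolis_accept(delta_E):
            return delta_E <= 0.0 or random.random() < math.exp(-BETA * delta_E)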

  14. Competitive adsorption and ordered packing of counterions near highly charged surfaces: From mean-field theory to Monte Carlo simulations.

    PubMed

    Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo

    2012-04-01

    Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson's equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise that the ionic species with a higher valence-to-volume ratio has a larger probability to form the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling.

  15. A Coarse Grained Model for Methylcellulose: Spontaneous Ring Formation at Elevated Temperature

    NASA Astrophysics Data System (ADS)

    Huang, Wenjun; Larson, Ronald

    Methylcellulose (MC) is widely used as a food additive and in pharmaceutical applications, where its thermo-reversible gelation behavior plays an important role. To date the gelation mechanism is not well understood and therefore attracts great research interest. In this study, we adopted coarse-grained (CG) molecular dynamics simulations to model MC chains, including homopolymers and random copolymers that model commercial METHOCEL A, in an implicit water environment, with each MC monomer modeled as a single bead. The simulations were carried out using the LAMMPS program. We parameterized our CG model using the radial distribution functions from atomistic simulations of short MC oligomers, extrapolating the results to long chains. We used the dissociation free energy to validate our CG model against the atomistic model. The CG model captured the effects of monomer substitution type and temperature from the atomistic simulations. We applied this CG model to simulate single chains up to 1000 monomers long and obtained persistence lengths that are close to those determined from experiment. We observed the chain-collapse transition for a random copolymer 600 monomers long at 50 °C. The chain collapsed into a stable ring structure with an outer diameter of around 14 nm, which appears to be a precursor to the fibril structure seen in methylcellulose gels by Lodge et al. in recent studies. Our CG model can be extended to other MC derivatives for studying the interaction between these polymers and small molecules, such as hydrophobic drugs.
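
    Refining a CG pair potential against target radial distribution functions is commonly done with an iterative-Boltzmann-inversion style update, sketched below as one generic possibility (not necessarily the authors' exact parameterization procedure):

        import numpy as np

        def ibi_update(u, g_cg, g_target, kT=1.0, alpha=0.5):
            """u, g_cg, g_target: arrays on a common r grid; returns updated u."""
            mask = (g_cg > 0) & (g_target > 0)
            du = np.zeros_like(u)
            du[mask] = kT * np.log(g_cg[mask] / g_target[mask])
            return u + alpha * du   # damped update toward matching the target RDF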

  16. Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.

    PubMed

    Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A

    2005-07-21

    Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
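
    With such a calibration, absolute dose at plan time reduces to a simple conversion; using the figure quoted above (8.129 × 10¹³ electrons on target per MU for the reference field), a per-particle MC dose becomes an absolute dose as sketched here:

        ELECTRONS_PER_MU = 8.129e13   # from the calibration above (6 MV, 10 × 10 cm²)

        def absolute_dose_cGy(dose_per_particle_Gy, mu):
            """dose_per_particle_Gy: MC-scored dose per incident electron."""
            return dose_per_particle_Gy * ELECTRONS_PER_MU * mu * 100.0  # Gy -> cGy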

  17. Feasibility of using Geant4 Monte Carlo simulation for IMRT dose calculations for the Novalis Tx with a HD-120 multi-leaf collimator

    NASA Astrophysics Data System (ADS)

    Jung, Hyunuk; Shin, Jungsuk; Chung, Kwangzoo; Han, Youngyih; Kim, Jinsung; Choi, Doo Ho

    2015-05-01

    The aim of this study was to develop an independent dose verification system by using a Monte Carlo (MC) calculation method for intensity modulated radiation therapy (IMRT) conducted by using a Varian Novalis Tx (Varian Medical Systems, Palo Alto, CA, USA) equipped with a high-definition multi-leaf collimator (HD-120 MLC). The Geant4 framework was used to implement a dose calculation system that accurately predicted the delivered dose. For this purpose, the Novalis Tx Linac head was modeled according to the specifications acquired from the manufacturer. Subsequently, MC simulations were performed by varying the mean energy, energy spread, and electron spot radius to determine optimum values of irradiation with 6-MV X-ray beams by using the Novalis Tx system. Computed percentage depth dose curves (PDDs) and lateral profiles were compared to the measurements obtained by using an ionization chamber (CC13). To validate the IMRT simulation by using the MC model we developed, we calculated a simple IMRT field and compared the result with the EBT3 film measurements in a water-equivalent solid phantom. Clinical cases, such as prostate cancer treatment plans, were then selected, and MC simulations were performed. The accuracy of the simulation was assessed against the EBT3 film measurements by using a gamma-index criterion. The optimal MC model parameters to specify the beam characteristics were a 6.8-MeV mean energy, a 0.5-MeV energy spread, and a 3-mm electron radius. The accuracy of these parameters was determined by comparison of MC simulations with measurements. The PDDs and the lateral profiles of the MC simulation deviated from the measurements by 1% and 2%, respectively, on average. The computed simple MLC fields agreed with the EBT3 measurements with a 95% passing rate with 3%/3-mm gamma-index criterion. Additionally, in applying our model to clinical IMRT plans, we found that the MC calculations and the EBT3 measurements agreed well with a passing rate of greater than 95% on average with a 3%/3-mm gamma-index criterion. In summary, the Novalis Tx Linac head equipped with a HD-120 MLC was successfully modeled by using a Geant4 platform, and the accuracy of the Geant4 platform was successfully validated by comparisons with measurements. The MC model we have developed can be a useful tool for pretreatment quality assurance of IMRT plans and for commissioning of radiotherapy treatment planning.
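
    A minimal 1D version of the 3%/3 mm gamma-index evaluation used above, assuming both dose profiles share the grid x (in mm) and global normalization:

        import numpy as np

        def gamma_1d(x, d_ref, d_eval, dd=0.03, dta=3.0):
            d_max = d_ref.max()
            g = np.empty_like(d_ref)
            for i, (xi, di) in enumerate(zip(x, d_ref)):
                dist2 = ((x - xi) / dta) ** 2
                dose2 = ((d_eval - di) / (dd * d_max)) ** 2
                g[i] = np.sqrt((dist2 + dose2).min())
            return g   # pass rate: np.mean(g <= 1.0)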

  18. "First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).

    PubMed

    Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert

    2012-03-15

    First-principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO2(110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of the TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.
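
    The core of a rejection-free kMC step of the kind used in such studies: select an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time (the rates below are placeholders, not the paper's values):

        import math, random

        def kmc_step(rates, t):
            total = sum(rates.values())
            r, acc = random.random() * total, 0.0
            for event, k in rates.items():
                acc += k
                if r < acc:
                    break
            return event, t - math.log(random.random()) / total

        event, t = kmc_step({"CO_ads": 5e2, "O2_ads": 1e3, "CO_oxidation": 4e1}, 0.0)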

  19. SU-E-T-155: Calibration of Variable Longitudinal Strength 103Pd Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J; Radtke, J; Micka, J

    Purpose: Brachytherapy sources with variable longitudinal strength (VLS) allow for a customized intensity along the length of the source. These have applications in focal brachytherapy treatments of prostate cancer, where dose boosting can be achieved through modulation of intra-source strengths. This work focused on development of a calibration methodology for VLS sources based on measurements and Monte Carlo (MC) simulations of five 1 cm {sup 103}Pd sources, each containing four regions of variable {sup 103}Pd strength. Methods: The air-kerma strengths of the sources were measured with a variable-aperture free-air chamber (VAFAC). Source strengths were also measured using a well chamber. The in-air azimuthal and polar anisotropy of the sources were measured by rotating them in front of a NaI scintillation detector and were calculated with MC simulations. Azimuthal anisotropy results were normalized to their mean intensity values. Polar anisotropy results were normalized to their average transverse-axis intensity values. The relative longitudinal strengths of the sources were measured via on-contact irradiations with radiochromic film, and were calculated with MC simulations. Results: The variable {sup 103}Pd loading of the sources was validated by VAFAC and well chamber measurements. Ratios of VAFAC air-kerma strengths and well chamber responses were within ±1.3% for all sources. Azimuthal anisotropy results indicated that ≥95% of the normalized values for all sources were within ±1.7% of the mean values. Polar anisotropy results indicated variations within ±0.3% for a ±7.6° angular region with respect to the source transverse axis. Locations and intensities of the {sup 103}Pd regions were validated by radiochromic film measurements and MC simulations. Conclusion: The calibration methodology developed in this work confirms that the VLS sources investigated have a high level of polar uniformity, and that the strength and longitudinal intensity can be verified experimentally and through MC simulations. {sup 103}Pd sources were provided by CivaTech Oncology, Inc.

  20. The effect of linear spring number at side load of McPherson suspension in electric city car

    NASA Astrophysics Data System (ADS)

    Budi, Sigit Setijo; Suprihadi, Agus; Makhrojan, Agus; Ismail, Rifky; Jamari, J.

    2017-01-01

    The spring in a McPherson-type suspension controls vehicle stability and improves ride comfort, but it is prone to side loads. The purpose of this study is to simulate the McPherson suspension spring of an electric city car using the finite element method and to determine the side load that appears on the spring. The research was conducted in several stages: designing linear spring models with various numbers of coils, and modeling the suspension spring in FEM software. The suspension spring was compressed in the vertical direction (z-axis), and the forces arising along the x-, y-, and z-axes at the upper part of the spring were recorded to characterize the side load. The FEM simulations show that the spring whose side-load components along the x- and y-axes are closest to zero is the most stable.

  1. LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme

    NASA Technical Reports Server (NTRS)

    Hadjadj, A; Yee, H. C.; Sjogreen, B.

    2011-01-01

    An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) and Reynolds numbers. The high-order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The values of Mc considered span the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self-similarity property, compressibility effects, and the presence of large-scale structures with shocklets at high Mc) are considered for the LES study. The LES results, using the same scheme parameters for all studied cases, agree well with the experimental results of Barone et al. (2006) and the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).
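
    For reference, the compressibility parameter Mc used above is the convective Mach number of the mixing layer, which for streams with equal specific-heat ratios takes the standard form

        M_c = \frac{U_1 - U_2}{a_1 + a_2}

    where U_1, U_2 are the free-stream velocities and a_1, a_2 the corresponding sound speeds.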

  2. Advanced techniques for mitigating the effects of temporal distortions in human in the loop control systems

    NASA Astrophysics Data System (ADS)

    Guo, Liwen

    The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, the simulation transport delay remains a problem. Because of the limitations shown in the three prominent existing delay compensators---the lead/lag filter, the McFarland compensator and the Sobiski/Cardullo predictor---new approaches to compensating the transport delay in a flight simulator have been developed. The first novel compensator is an adaptive predictor that uses the Kalman filter algorithm in a unique manner so that the predictor can accurately provide the desired amount of prediction, significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, it is shown mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. Four metrics---the glide slope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating on the handling qualities---were employed for the analyses. The overall analyses show that while the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator, the state space predictor is fairly superior for short delay and significantly superior for long delay to the McFarland compensator. The state space predictor also achieves better compensation than the adaptive predictor. The results of the evaluation of the effectiveness of these predictors in the piloted tests agree with those of the theoretical offline tests conducted with the recorded simulation aircraft states.
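
    A minimal sketch of the state-space prediction idea described above: propagate the current state through a discrete reference model for the known delay of d frames (the 2-state model matrix here is purely illustrative, not the aircraft model used in the study):

        import numpy as np

        A = np.array([[1.0, 0.02],
                      [0.0, 0.98]])            # hypothetical reference model

        def predict(x, d):
            """Predict the state d frames ahead to offset the transport delay."""
            return np.linalg.matrix_power(A, d) @ x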

  3. A comparative study of history-based versus vectorized Monte Carlo methods in the GPU/CUDA environment for a simple neutron eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.

    2014-06-01

    For nuclear reactor analysis such as neutron eigenvalue calculations, the time-consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread-divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA) and tested on an NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
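
    The history-based versus event-based distinction can be sketched with a vectorized toy transport loop, in which one event type at a time is applied to the whole particle bank (the physics here is fabricated for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        energy = rng.uniform(0.1, 2.0, 100_000)        # toy neutron bank (MeV)
        alive = np.ones(energy.size, dtype=bool)

        while alive.any():
            hit = alive & (rng.random(energy.size) < 0.3)          # collision event
            absorbed = hit & (rng.random(energy.size) < 0.2)
            scattered = hit & ~absorbed
            energy[scattered] *= rng.uniform(0.3, 0.9, scattered.sum())
            alive &= ~absorbed & (energy > 0.01)       # cut-off ends histories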

  4. Interfacing MCNPX and McStas for simulation of neutron transport

    NASA Astrophysics Data System (ADS)

    Klinkby, Esben; Lauritzen, Bent; Nonbøl, Erik; Kjær Willendrup, Peter; Filges, Uwe; Wohlmuther, Michael; Gallmeier, Franz X.

    2013-02-01

    Simulations of target-moderator-reflector systems at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX (Waters et al., 2007 [1]) or FLUKA (Battistoni et al., 2007; Ferrari et al., 2005 [2,3]), whereas simulations of neutron transport from the moderator and the instrument response are performed by neutron ray-tracing codes such as McStas (Lefmann and Nielsen, 1999; Willendrup et al., 2004, 2011a,b [4-7]). The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations, as it does not, for example, allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve such shortcomings includes the introduction of McStas-inspired supermirrors in MCNPX. In the present paper, different approaches to interfacing MCNPX and McStas are presented and applied to a simple test case. The direct coupling between MCNPX and McStas allows for more accurate simulations of, e.g., complex moderator geometries, backgrounds, interference between beam-lines, and shielding requirements along the neutron guides.

  5. A medical image-based graphical platform -- features, applications and relevance for brachytherapy.

    PubMed

    Fonseca, Gabriel P; Reniers, Brigitte; Landry, Guillaume; White, Shane; Bellezzo, Murillo; Antunes, Paula C G; de Sales, Camila P; Welteman, Eduardo; Yoriyaz, Hélio; Verhaegen, Frank

    2014-01-01

    Brachytherapy dose calculation is commonly performed using the Task Group No. 43 Report-Updated protocol (TG-43U1) formalism. Recently, a more accurate approach has been proposed that can handle tissue composition, tissue density, body shape, applicator geometry, and dose reporting either in media or in water. Some model-based dose calculation algorithms are based on Monte Carlo (MC) simulations. This work presents a software platform capable of processing medical images and treatment plans and preparing the required input data for MC simulations. The A Medical Image-based Graphical platfOrm-Brachytherapy module (AMIGOBrachy) is a user interface, coupled to the MCNP6 MC code, for absorbed dose calculations. The AMIGOBrachy was first validated in water for a high-dose-rate ¹⁹²Ir source. Next, dose distributions were validated in uniform phantoms consisting of different materials. Finally, dose distributions were obtained in patient geometries. Results were compared against a treatment planning system including a linear Boltzmann transport equation (LBTE) solver capable of handling non-water heterogeneities. The TG-43U1 source parameters are in good agreement with the literature, with more than 90% of anisotropy values within 1%. No significant dependence on the tissue composition was observed when comparing MC results against the LBTE solver. Clinical cases showed differences up to 25% when comparing MC results against TG-43U1. About 92% of the voxels exhibited dose differences lower than 2% when comparing MC results against the LBTE solver. The AMIGOBrachy can improve the accuracy of the TG-43U1 dose calculation by using a more accurate MC dose calculation algorithm, and it can be incorporated in clinical practice via a user-friendly graphical interface. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
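
    For context, the TG-43U1 formalism that serves as the comparison baseline computes the dose rate as

        \dot{D}(r, \theta) = S_K \, \Lambda \, \frac{G_L(r, \theta)}{G_L(r_0, \theta_0)} \, g_L(r) \, F(r, \theta)

    with air-kerma strength S_K, dose-rate constant \Lambda, line-source geometry function G_L, radial dose function g_L, and 2D anisotropy function F, evaluated relative to the reference point (r_0 = 1 cm, \theta_0 = \pi/2); model-based algorithms such as the MC approach above replace this water-based parameterization with patient-specific transport.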

  6. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Furthermore, simulation methods based on the best-known prediction methods (such as indicator kriging and co-kriging) are implemented in the spMC package. Other, more advanced methods are also available for simulation, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
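
    The basic quantity such packages estimate, a one-step transition probability matrix for categorical lithologies, can be sketched generically as follows (this is a conceptual illustration, not the spMC API):

        import numpy as np

        def transition_matrix(sequence, n_states):
            counts = np.zeros((n_states, n_states))
            for a, b in zip(sequence[:-1], sequence[1:]):
                counts[a, b] += 1.0
            row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
            return counts / row_sums

        tm = transition_matrix([0, 0, 1, 2, 2, 1, 0, 0, 1], n_states=3)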

  7. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
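
    The reported inverse power model T(n) = a·n^b (with b < 0) can be recovered from (cluster size, runtime) pairs with a log-log fit; the two end-point runtimes below are those quoted above, while the intermediate values are made up for illustration:

        import numpy as np

        nodes = np.array([1, 2, 5, 10, 20])
        runtime = np.array([53.0, 27.5, 11.8, 6.2, 3.11])    # minutes

        b, log_a = np.polyfit(np.log(nodes), np.log(runtime), 1)
        print(f"T(n) ~ {np.exp(log_a):.1f} * n^{b:.2f}")     # exponent close to -1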

  8. Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object

    NASA Astrophysics Data System (ADS)

    Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain

    2011-03-01

    Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of treatment planning systems in radiotherapy; the method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on a Varian Eclipse Treatment Planning System. Large dose variations were observed, particularly in air- and bone-equivalent material. In the current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the Varian Eclipse Anisotropic Analytical Algorithm was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code (version 2003). To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using pelvic DTOs and was performed for low- and high-energy photon beams (6 and 18 MV) on a 2100CD Varian linear accelerator. A square field (10 × 10 cm²) was used. Taking the MC data as reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm and the dose difference to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium, whereas the TPS computes dose to water. We used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show strong consistency between Eclipse and MC calculations on the beam axis.
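
    The Siebers et al. conversion applied above follows Bragg-Gray cavity theory:

        D_{\mathrm{w}} = D_{\mathrm{med}} \left( \overline{S/\rho} \right)^{\mathrm{w}}_{\mathrm{med}}

    where the factor is the ratio of the unrestricted mass collision stopping powers of water to medium, averaged over the local electron spectrum.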

  9. Absolute dose calculations for Monte Carlo simulations of radiotherapy beams

    NASA Astrophysics Data System (ADS)

    Popescu, I. A.; Shaw, C. P.; Zavgorodni, S. F.; Beckham, W. A.

    2005-07-01

    Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.

  10. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  11. Pseudo hard-sphere potential for use in continuous molecular-dynamics simulation of spherical and chain molecules

    NASA Astrophysics Data System (ADS)

    Jover, J.; Haslam, A. J.; Galindo, A.; Jackson, G.; Müller, E. A.

    2012-10-01

    We present a continuous pseudo-hard-sphere potential based on a cut-and-shifted Mie (generalized Lennard-Jones) potential with exponents (50, 49). Using this potential one can mimic the volumetric, structural, and dynamic properties of the discontinuous hard-sphere potential over the whole fluid range. The continuous pseudo potential has the advantage that it may be incorporated directly into off-the-shelf molecular-dynamics code, allowing the user to capitalise on existing hardware and software advances. Simulation results for the compressibility factor of the fluid and solid phases of our pseudo hard spheres are presented and compared both to the Carnahan-Starling equation of state of the fluid and published data, the differences being indistinguishable within simulation uncertainty. The specific form of the potential is employed to simulate flexible chains formed from these pseudo hard spheres at contact (pearl-necklace model) for mc = 4, 5, 7, 8, 16, 20, 100, 201, and 500 monomer segments. The compressibility factor of the chains per unit of monomer, mc, approaches a limiting value at reasonably small values, mc < 50, as predicted by Wertheim's first order thermodynamic perturbation theory. Simulation results are also presented for highly asymmetric mixtures of pseudo hard spheres, with diameter ratios of 3:1, 5:1, 20:1 over the whole composition range.
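
    A sketch of the cut-and-shifted Mie (50, 49) potential described above: truncated at its minimum r_min = (50/49)σ and shifted up by ε, leaving a steep, purely repulsive core:

        # Mie prefactor (n/(n-m)) * (n/m)**(m/(n-m)) for (n, m) = (50, 49)
        C_MIE = 50.0 * (50.0 / 49.0) ** 49

        def u_pseudo_hs(r, sigma=1.0, eps=1.0):
            r_min = (50.0 / 49.0) * sigma
            if r >= r_min:
                return 0.0                               # cut at the minimum
            x = sigma / r
            return C_MIE * eps * (x**50 - x**49) + eps   # shifted up by +eps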

  12. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA, and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and then the optimal parameters were determined. The PDD profile and the proton range obtained with our optimized parameter lists showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical model, particle transport mechanics, and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.

  13. SU-G-JeP2-15: Proton Beam Behavior in the Presence of Realistic Magnet Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, D M; Wachowicz, K; Fallone, B G

    2016-06-15

    Purpose: To investigate the effects of magnetic fields on proton therapy beams for integration with MRI. Methods: 3D magnetic fields from an open-bore superconducting MRI model (previously developed by our group) and 3D magnetic fields from an in-house gradient coil design were applied to various mono-energetic proton pencil beam (80 MeV to 250 MeV) simulations. In all simulations, the z-axis of the simulation geometry coincided with the direction of the B0 field and the magnet isocentre. In each simulation, the initial beam trajectory was varied. The first set of simulations was based on analytic magnetic force equations (analytic simulations), which could be rapidly calculated yet were limited to propagating proton beams in vacuum. The second set comprised full Monte Carlo (MC) simulations using the GEANT4 MC toolkit. Metrics such as the beam position and dose profiles were extracted. Comparisons between the cases with and without magnetic fields present were made. Results: The analytic simulations served as verification checks for the MC simulations when the same simulation geometries were used. The results of the analytic simulations agreed with the MC simulations performed in vacuum. The presence of the MRI's static magnetic field causes proton pencil beams to follow a slightly helical trajectory when there are initial off-axis components. The 80 MeV, 150 MeV, and 250 MeV proton beams rotated by 4.9°, 3.6°, and 2.8°, respectively, when they reached z = 0 cm. The deflections caused by the gradient coils' magnetic fields show spatially invariant patterns with a maximum range of 0.5 mm at z = 0 cm. Conclusion: This investigation reveals that both the MRI's B0 and gradient magnetic fields can cause small but observable deflections of proton beams at the energies studied. The MRI's static field caused a rotation of the beam, while the gradient coils' field effects were spatially invariant. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
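
    The small deflections reported above can be sanity-checked with the relativistic gyroradius r = p/(qB); the 0.5 T field strength below is an assumed illustrative value, not necessarily that of the modeled magnet:

        import math

        MP_MEV = 938.272        # proton rest energy (MeV)
        E_CH = 1.602176634e-19  # elementary charge (C)

        def gyroradius_m(T_mev, B_tesla):
            e_tot = T_mev + MP_MEV
            pc_mev = math.sqrt(e_tot**2 - MP_MEV**2)       # pc in MeV
            p_si = pc_mev * 1.0e6 * E_CH / 299792458.0     # momentum in kg·m/s
            return p_si / (E_CH * B_tesla)

        print(gyroradius_m(250.0, 0.5))   # ~4.9 m: gentle curvature over ~10 cm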

  14. Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities

    NASA Astrophysics Data System (ADS)

    Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.

    2012-03-01

    A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions - even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft-tissue. Scatter-to-primary ratio (SPR) up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streaks. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated, yet practical approach in identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.

  15. Oxidation of a new Biogenic VOC: Chamber Studies of the Atmospheric Chemistry of Methyl Chavicol

    NASA Astrophysics Data System (ADS)

    Bloss, William; Alam, Mohammed; Adbul Raheem, Modinah; Rickard, Andrew; Hamilton, Jacqui; Pereira, Kelly; Camredon, Marie; Munoz, Amalia; Vazquez, Monica; Vera, Teresa; Rodenas, Mila

    2013-04-01

    The oxidation of volatile organic compounds (VOCs) leads to the formation of ozone and SOA, with consequences for air quality, health, crop yields, atmospheric chemistry, and radiative transfer. Recent observations have identified Methyl Chavicol ("MC": Estragole; 1-allyl-4-methoxybenzene, C10H12O) as a major BVOC above pine forests in the USA and oil palm plantations in Malaysian Borneo. Palm oil cultivation, and hence MC emissions, may be expected to increase with societal food and biofuel demand. We present the results of a series of simulation chamber experiments to assess the atmospheric fate of MC. Experiments were performed in the EUPHORE facility, monitoring stable product species, radical intermediates, and aerosol production and composition. We determine rate constants for the reaction of MC with OH and O3, and ozonolysis radical yields. Stable product measurements (FTIR, PTR-MS, GC-SPME) are used to determine the yields of stable products formed from OH- and O3-initiated oxidation, and to develop an understanding of the initial stages of the MC degradation chemistry. A surrogate mechanism approach is used to simulate MC degradation within the MCM, evaluated in terms of the ozone production measured in the chamber experiments, and applied to quantify the role of MC in the real atmosphere.
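
    The atmospheric relevance of such rate constants comes down to a pseudo-first-order lifetime τ = 1/(k·[OH]); the rate constant below is an illustrative placeholder, not the value measured in this work:

        K_OH = 5.0e-11   # cm^3 molecule^-1 s^-1, illustrative
        OH = 1.0e6       # molecules cm^-3, a typical daytime mean

        tau_hours = 1.0 / (K_OH * OH) / 3600.0
        print(f"tau(OH) ~ {tau_hours:.1f} h")   # ~5.6 h for these inputs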

  16. Development of a Multi-Channel Piezoelectric Acoustic Sensor Based on an Artificial Basilar Membrane

    PubMed Central

    Jung, Youngdo; Kwak, Jun-Hyuk; Lee, Young Hwa; Kim, Wan Doo; Hur, Shin

    2014-01-01

    In this research, we have developed a multi-channel piezoelectric acoustic sensor (McPAS) that mimics the function of the natural basilar membrane, capable of separating incoming acoustic signals mechanically by their frequency and generating corresponding electrical signals. The McPAS operates with a vibrating piezoelectric thin-film membrane, without an external energy source or signal-processing unit. The shape of the vibrating membrane was chosen to be trapezoidal such that different locations of the membrane have different local resonance frequencies. The length of the membrane is 28 mm and its width varies from 1 mm to 8 mm. Multiphysics finite element analysis (FEA) was carried out to predict and design the mechanical behavior and piezoelectric response of the McPAS model. The designed McPAS was fabricated with a MEMS fabrication process based on the simulated results. The fabricated device was tested with a mouth simulator to measure its mechanical and piezoelectric frequency response with a laser Doppler vibrometer and an acoustic signal analyzer. The experimental results show that the as-fabricated McPAS can successfully separate incoming acoustic signals within the 2.5 kHz–13.5 kHz range, and the maximum electrical signal output upon an acoustic signal input of 94 dB SPL was 6.33 mVpp. The performance of the fabricated McPAS coincided well with the design parameters. PMID:24361926

  17. Conceptual Modeling (CM) for Military Modeling and Simulation (M&S) (Modelisation conceptuelle (MC) pour la modelisation et la simulation (M&S) militaires)

    DTIC Science & Technology

    2012-07-01

    ... of the modeling and simulation world and provide it with implementation guidelines; and provide ... definition; relationship to standards; specification of the CM management process; specification of CM artifacts. Important considerations ... using this guideline as a reference. • VV&A (verification, validation, and acceptance) of CMs must form an integral part of

  18. Theoretical study of the ammonia nitridation rate on an Fe (100) surface: a combined density functional theory and kinetic Monte Carlo study.

    PubMed

    Yeo, Sang Chul; Lo, Yu Chieh; Li, Ju; Lee, Hyuck Mo

    2014-10-07

    Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (Eb) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage, which allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
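
    The kMC machinery described here can be sketched with a standard rejection-free (Gillespie/BKL-style) update: each event rate follows an Arrhenius form built from its barrier, one event is selected with probability proportional to its rate, and the clock advances by an exponential waiting time. The barriers, temperature, and attempt frequency below are placeholders, not the paper's DFT values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    KB_T = 8.617e-5 * 700          # eV at ~700 K (illustrative nitriding temperature)
    NU = 1e13                      # attempt frequency in 1/s (common assumption)

    # Placeholder barriers (eV) for the event catalogue named in the abstract.
    barriers = {"adsorption": 0.0, "desorption": 0.9, "dissociation": 1.1,
                "reverse_dissociation": 1.3, "migration": 0.7,
                "penetration": 1.2, "reverse_penetration": 1.4, "diffusion": 0.8}

    def kmc_step(t):
        rates = {e: NU * np.exp(-eb / KB_T) for e, eb in barriers.items()}
        total = sum(rates.values())
        r = rng.random() * total          # pick an event with probability rate/total
        acc = 0.0
        for event, rate in rates.items():
            acc += rate
            if r < acc:
                break
        t += -np.log(rng.random()) / total   # exponential waiting time
        return event, t

    t = 0.0
    for _ in range(5):
        event, t = kmc_step(t)
        print(f"t = {t:.3e} s  ->  {event}")
    ```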

  19. Binding, Thermodynamics, and Selectivity of a Non-peptide Antagonist to the Melanocortin-4 Receptor

    PubMed Central

    Saleh, Noureldin; Kleinau, Gunnar; Heyder, Nicolas; Clark, Timothy; Hildebrand, Peter W.; Scheerer, Patrick

    2018-01-01

    The melanocortin-4 receptor (MC4R) is a potential drug target for the treatment of obesity, anxiety, depression, and sexual dysfunction. Crystal structures for MC4R are not yet available, which has hindered successful structure-based drug design. Using microsecond-scale molecular-dynamics simulations, we have investigated selective binding of the non-peptide antagonist MCL0129 to a homology model of human MC4R (hMC4R). This approach revealed that, at the end of a multi-step binding process, MCL0129 spontaneously adopts a binding mode in which it blocks the agonist-binding site. This binding mode was confirmed in subsequent metadynamics simulations, which gave an affinity for hMC4R that matches the experimentally determined value. Extending our simulations of MCL0129 binding to hMC1R and hMC3R, we find that receptor subtype selectivity for hMC4R depends on a few amino acids located in various structural elements of the receptor. These insights may support rational drug design targeting the melanocortin systems.

  20. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    NASA Astrophysics Data System (ADS)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2008-02-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model the physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that photons are detected at multiple detector locations, weighted by a distance-dependent blurring kernel. To further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This makes it possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with experimental measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. MP-CFD is shown to be about 60 times faster than a regular forced-detection MC program while producing similar results.
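
    The CFD idea, depositing each scatter event onto the detector through a distance-dependent blur rather than at a single pixel, can be sketched in a few lines. The Gaussian kernel whose width grows linearly with the scatter-site-to-detector distance is an assumed stand-in for the system-specific blurring kernel:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    DETECTOR_BINS = 128

    def deposit_cfd(projection, pixel, weight, distance):
        """Deposit a scatter event as a distance-dependent blur (CFD-style)."""
        impulse = np.zeros(DETECTOR_BINS)
        impulse[pixel] = weight
        sigma = 0.05 * distance          # assumed linear blur-vs-distance model, in pixels
        projection += gaussian_filter1d(impulse, sigma=max(sigma, 1e-3))

    projection = np.zeros(DETECTOR_BINS)
    # Example: one scatter site seen at pixel 64, photon weight 1.0, 150 mm away.
    deposit_cfd(projection, pixel=64, weight=1.0, distance=150.0)
    print(projection.sum())  # total weight is (approximately) preserved by the kernel
    ```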

  1. Fast calculation of tissue optical properties using MC and the experimental evaluation for diagnosis of cervical cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan

    2011-02-01

    This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and reduced scattering coefficient μs′) of cylindrical tissue, such as the cervix, from frequency-domain measurements of near-infrared diffuse light. Frequency-domain information (amplitude and phase) is extracted from time-domain MC with a modified method. To shorten the computation time in the reconstruction of optical properties, an efficient and fast forward MC has to be achieved. To do this, databases of the frequency-domain information over a range of μa and μs′ were first pre-built by combining MC simulation with the Beer-Lambert law. Then, a double polynomial model was adopted to quickly obtain the frequency-domain information for any optical properties. Based on the fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstructions from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in the reconstruction of μa and μs′ are less than ±6% and ±12%, respectively, when the other coefficient (μs′ or μa) is held at a fixed value. When both μa and μs′ are unknown, the relative errors in the reconstruction of the reduced scattering coefficient and absorption coefficient are mainly less than ±10% in the range 45 cm⁻¹ < μs′ < 80 cm⁻¹ and 0.25 cm⁻¹ < μa < 0.55 cm⁻¹. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 s. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that a relative error of less than 20% can be achieved.
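
    The "double polynomial model" standing in for the forward MC can be sketched as a polynomial surrogate: amplitude (and, analogously, phase) from the pre-built database is fit as a low-order polynomial in (μa, μs′), so each forward evaluation inside the nonlinear inversion becomes a cheap polynomial evaluation. The database values and polynomial order below are illustrative assumptions:

    ```python
    import numpy as np

    # Hypothetical pre-built database: amplitude A simulated by MC on a grid of
    # optical properties (mua in 1/cm, mus' in 1/cm).
    mua = np.linspace(0.25, 0.55, 7)
    musp = np.linspace(45, 80, 8)
    MUA, MUSP = np.meshgrid(mua, musp)
    A = np.exp(-1.8 * np.sqrt(MUA * MUSP) / 10)   # stand-in for MC-generated amplitude

    # Fit a 2nd-order 2-D polynomial surrogate A(mua, mus').
    X = np.column_stack([np.ones(MUA.size), MUA.ravel(), MUSP.ravel(),
                         MUA.ravel()**2, MUSP.ravel()**2, (MUA * MUSP).ravel()])
    coef, *_ = np.linalg.lstsq(X, A.ravel(), rcond=None)

    def forward(mua_q, musp_q):
        """Fast surrogate replacing a full MC run for one forward evaluation."""
        x = np.array([1.0, mua_q, musp_q, mua_q**2, musp_q**2, mua_q * musp_q])
        return x @ coef

    print(forward(0.4, 60.0))
    ```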

  2. Polarimetric Radar Characteristics of Simulated and Observed Intense Convection Between Continental and Maritime Environment

    NASA Astrophysics Data System (ADS)

    Matsui, T.; Dolan, B.; Tao, W. K.; Rutledge, S. A.; Iguchi, T.; Barnum, J. I.; Lang, S. E.

    2017-12-01

    This study presents the polarimetric radar characteristics of intense convective cores derived from observations as well as from a polarimetric radar simulator applied to cloud-resolving model (CRM) simulations of the Midlatitude Continental Convective Clouds Experiment (MC3E) 23 May case over Oklahoma and the Tropical Warm Pool-International Cloud Experiment (TWP-ICE) 23 January case over Darwin, Australia, to highlight the contrast between continental and maritime convection. The POLArimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a state-of-the-art T-matrix/Mueller-matrix-based polarimetric radar simulator that can generate synthetic polarimetric radar signals (reflectivity, differential reflectivity, specific differential phase, co-polar correlation) as well as synthetic radar retrievals (precipitation, hydrometeor type, updraft velocity) through the consistent treatment of cloud microphysics and dynamics from CRMs. The Weather Research and Forecasting (WRF) model is configured to simulate continental and maritime severe storms over the MC3E and TWP-ICE domains with the Goddard bulk 4ICE single-moment microphysics and HUCM spectral-bin microphysics. Various statistical diagrams of polarimetric radar signals, hydrometeor types, updraft velocity, and precipitation intensity are investigated for convective and stratiform precipitation regimes and directly compared between the MC3E and TWP-ICE cases. The results show that MC3E convection is characterized by very strong reflectivity (up to 60 dBZ), slightly negative differential reflectivity (−0.8 to 0 dB), and near-zero specific differential phase above the freezing level. On the other hand, TWP-ICE convection shows strong reflectivity (up to 50 dBZ), slightly positive differential reflectivity (0 to 1.0 dB), and specific differential phase (0 to 0.8 deg/km). The Hydrometeor IDentification (HID) algorithm applied to the observations and simulations detects a hail-dominated convective core in MC3E and a graupel-dominated convective core in TWP-ICE. This land-ocean contrast agrees with previous studies using radar and radiometer signals from the TRMM satellite climatology associated with warm-cloud depths and the vertical structure of buoyancy.

  3. Monte Carlo simulations to replace film dosimetry in IMRT verification.

    PubMed

    Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig

    2011-01-01

    Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program, with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil-beam-based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of the diode measurements, and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and the MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3 mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference), with dose deviations up to 10%. The corresponding value was significantly reduced to 0.34 ± 0.09 for the MC dose calculation. The total time needed for the two verification procedures is comparable; however, the MC-based procedure is by far less labor intensive. The presented study showed that independent dose calculation verification of IMRT plans with a fast MC program has the potential to replace film dosimetry in the near future. Thus, the linac-specific QA part will necessarily become more important. In combination with MC simulations, and due to the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended at least in the IMRT introduction phase. Copyright © 2010. Published by Elsevier GmbH.
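
    The gamma criterion quoted above (3% dose difference, 3 mm distance-to-agreement) can be made concrete with a brute-force 1-D sketch; this is only meant to show the metric itself, not to reproduce the clinical software used in the study:

    ```python
    import numpy as np

    def gamma_1d(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
        """Brute-force 1-D gamma index with global dose normalization."""
        x = np.arange(dose_ref.size) * spacing_mm
        norm = dd * dose_ref.max()              # global 3% criterion
        gamma = np.empty(dose_eval.size)
        for i in range(dose_eval.size):
            dist2 = ((x - x[i]) / dta_mm) ** 2
            dose2 = ((dose_ref - dose_eval[i]) / norm) ** 2
            gamma[i] = np.sqrt(np.min(dist2 + dose2))
        return gamma

    ref = np.exp(-0.5 * ((np.arange(100) - 50) / 12.0) ** 2)   # toy dose profile
    ev = np.roll(ref, 1) * 1.01                                # shifted, 1% hotter
    g = gamma_1d(ref, ev, spacing_mm=1.0)
    print(f"pass rate: {(g <= 1).mean():.1%}, mean gamma: {g.mean():.2f}")
    ```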

  4. Gravity affects the responsiveness of Runx2 to 1, 25-dihydroxyvitamin D3 (VD3)

    NASA Astrophysics Data System (ADS)

    Guo, Feima; Dai, Zhongquan; Wu, Feng; Liu, Zhaoxia; Tan, Yingjun; Wan, Yumin; Shang, Peng; Li, Yinghui

    2013-03-01

    Bone loss resulting from spaceflight is mainly caused by decreased bone formation and decreased osteoblast proliferation and differentiation. The transcription factor Runx2 plays an important role in osteoblast differentiation and function by responding to microenvironment changes, including cytokines and mechanical factors. The effect of 1,25-dihydroxyvitamin D3 (VD3) on Runx2 under varying mechanical conditions is far less clear. This study describes how gravity affects the response of Runx2 to VD3. An MC3T3-6OSE2-Luc osteoblast model was constructed in which the activity of Runx2 is reflected by reporter luciferase activity, verified using bone-related cytokines. The results showed that luciferase activity in MC3T3-6OSE2-Luc cells transfected with Runx2 was twice that of the empty vector. Alkaline phosphatase (ALP) activity was increased in MC3T3-6OSE2-Luc cells by different concentrations of IGF-I and BMP2. MC3T3-6OSE2-Luc cells were cultured under simulated microgravity or centrifugation, with or without VD3. Under simulated microgravity, luciferase activity was decreased after 48 h of clinorotation culture, but it increased in the centrifuge culture. Luciferase activity was increased after VD3 treatment in both normal conditions and simulated microgravity; with simultaneous VD3 treatment, the increase in luciferase activity under simulated microgravity was lower than that in the 1 g condition and higher than that in the centrifuge condition. Co-immunoprecipitation showed that the interaction between the VD3 receptor (VDR) and Runx2 was decreased by simulated microgravity but increased by centrifugation. From these results, we conclude that gravity affects the response of Runx2 to VD3, resulting from an alteration in the interaction between VDR and Runx2 under different gravity conditions.

  5. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in the detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to an "oracle" constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
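
    The kernel-smoothing step, suppressing residual MC noise in the projection plane and filling in the angularly downsampled projections, can be sketched with a separable Gaussian over an (angle, u, v) scatter stack followed by interpolation back to the full angle set. The array sizes and kernel widths are illustrative, not the optimized values from the abstract:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    rng = np.random.default_rng(1)

    # Noisy MC scatter estimates on 180 of 720 gantry angles (4-fold downsampling),
    # detector downsampled to 64 x 64 for the sketch.
    scatter_coarse = rng.poisson(lam=50, size=(180, 64, 64)).astype(float)

    # Smooth in-plane and across angles to suppress MC noise (illustrative sigmas).
    smooth = gaussian_filter(scatter_coarse, sigma=(2.0, 3.0, 3.0))

    # Interpolate the missing projections back to the full 720-angle set.
    scatter_full = zoom(smooth, zoom=(720 / 180, 1, 1), order=1)
    print(scatter_full.shape)   # (720, 64, 64)
    ```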

  6. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Owing to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test-based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space, and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test-based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test-based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
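
    Operationally, such a t-test-based stopping rule amounts to treating batch means of the deposited dose as i.i.d. samples and stopping once the t-based confidence interval half-width falls below the error threshold. A minimal sketch under assumed batch sizes and thresholds (the exponential "dose per particle" is a stand-in for a real MC tally):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def mc_dose_batches(threshold=0.01, conf=0.95, batch=10_000, max_batches=500):
        """Run MC in batches; stop when the t-based CI half-width < threshold."""
        means = []
        for _ in range(max_batches):
            # Stand-in for dose deposited in one voxel by `batch` source particles.
            means.append(rng.exponential(scale=1.0, size=batch).mean())
            if len(means) >= 5:
                sem = stats.sem(means)
                t_crit = stats.t.ppf(0.5 + conf / 2, df=len(means) - 1)
                if t_crit * sem < threshold:           # stopping criterion met
                    return np.mean(means), len(means)
        return np.mean(means), len(means)

    dose, n = mc_dose_batches()
    print(f"dose ~ {dose:.4f} after {n} batches")
    ```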

  7. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA, the Fermi GPU. In biomedical optics, the MC method is the gold-standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
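
    For orientation, the photon random walk at the core of MCML-style codes reduces to: sample a free path from the Beer-Lambert distribution, deposit the absorbed fraction of the photon weight into a tally grid, and continue with the scattered remainder. The single-layer, 1-D sketch below is a deliberately crude caricature (the real code handles layered tissue, Henyey-Greenstein scattering, and Fresnel boundaries):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    MU_A, MU_S = 0.1, 10.0        # absorption/scattering coefficients, 1/mm
    MU_T = MU_A + MU_S
    DEPTH, BIN = 20.0, 0.1        # slab depth and tally bin size, mm
    GRID = np.zeros(int(DEPTH / BIN))

    def trace_photon():
        z, w = 0.0, 1.0           # depth and photon weight
        while w > 1e-2:
            # crude 1-D direction model: mostly forward, sometimes backward
            step = -np.log(rng.random()) / MU_T
            z += step if rng.random() < 0.75 else -step
            if z < 0.0 or z >= DEPTH:
                return            # photon left the slab
            GRID[int(z / BIN)] += w * MU_A / MU_T   # deposit absorbed fraction
            w *= MU_S / MU_T                        # surviving weight scatters on

    for _ in range(5000):
        trace_photon()
    print(f"peak absorption near z = {GRID.argmax() * BIN:.1f} mm")
    ```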

  8. Comparison of Two Stochastic Daily Rainfall Models and their Ability to Preserve Multi-year Rainfall Variability

    NASA Astrophysics Data System (ADS)

    Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka

    2016-04-01

    Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately reproduce multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov chain (MC) model in a traditional two-part MC-Gamma distribution modelling structure, but with a new parameterization technique. We used two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of a Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths (see the sketch below). We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multivariate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sampled these two Gamma parameters from the multivariate normal distribution for each month of each year and used them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over pre-specified past periods (e.g., the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve the observed statistics, we have used ground-based data from 15 rain-gauge stations around Australia, which cover a wide range of climate zones including coastal, monsoonal, and arid characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrological simulation in the presentation.
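
    A minimal sketch of the hierarchical two-part structure described above: a first-order Markov chain decides wet/dry occurrence from the two transition probabilities, and wet-day depths are drawn from a Gamma distribution whose (mean, standard deviation) pair is itself resampled each month from a fitted multivariate normal. All parameter values are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    P_WW, P_DD = 0.65, 0.80                       # wet->wet and dry->dry probabilities
    GAMMA_MVN_MEAN = np.array([8.0, 9.0])         # monthly mean & std of wet-day depth (mm)
    GAMMA_MVN_COV = np.array([[1.5, 0.8], [0.8, 2.0]])

    def simulate_month(days=30):
        # Resample the Gamma parameters for this month (hierarchical step).
        mu, sd = rng.multivariate_normal(GAMMA_MVN_MEAN, GAMMA_MVN_COV)
        mu, sd = max(mu, 0.1), max(sd, 0.1)
        shape, scale = (mu / sd) ** 2, sd ** 2 / mu   # moment-matched Gamma parameters
        wet, rain = False, []
        for _ in range(days):
            stay = P_WW if wet else P_DD              # first-order occurrence chain
            wet = wet if rng.random() < stay else not wet
            rain.append(rng.gamma(shape, scale) if wet else 0.0)
        return np.array(rain)

    month = simulate_month()
    print(f"wet days: {(month > 0).sum()}, total: {month.sum():.1f} mm")
    ```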

  9. Equation of state and Helmholtz free energy for the atomic system of the repulsive Lennard-Jones particles.

    PubMed

    Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid

    2017-12-07

    Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to the Monte Carlo (MC) simulation data over the reduced temperature range 0.6 ≤ T* ≤ 6.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy using the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of hard spheres are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from the MC simulation approaches agree well with the EOS results, which shows that the proposed model can reliably be utilized in the framework of thermodynamic theories.

  10. Equation of state and Helmholtz free energy for the atomic system of the repulsive Lennard-Jones particles

    NASA Astrophysics Data System (ADS)

    Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid

    2017-12-01

    Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to the Monte Carlo (MC) simulation data over the reduced temperature range 0.6 ≤ T* ≤ 6.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy using the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of hard spheres are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from the MC simulation approaches agree well with the EOS results, which shows that the proposed model can reliably be utilized in the framework of thermodynamic theories.
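
    The Widom particle-insertion check mentioned in both listings estimates the excess chemical potential as μ_ex = −kBT ln⟨exp(−βΔU)⟩ for a ghost particle inserted at random positions. A minimal sketch with a purely repulsive LJ-like (WCA-form) pair energy; the box, density, and the use of a random rather than equilibrated configuration are simplifications:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    BETA, L, N = 1.0, 8.0, 60        # 1/kT, box edge, particle count (illustrative)
    pos = rng.random((N, 3)) * L     # in practice, an equilibrated configuration is used

    def rep_lj(r2):
        """Purely repulsive LJ-like pair energy (WCA form, eps = sigma = 1)."""
        inv6 = 1.0 / r2 ** 3
        return 4.0 * (inv6 ** 2 - inv6) + 1.0 if r2 < 2 ** (1 / 3) else 0.0

    def insertion_energy(trial):
        d = pos - trial
        d -= L * np.round(d / L)                 # minimum-image convention
        return sum(rep_lj(r2) for r2 in (d ** 2).sum(axis=1))

    boltz = [np.exp(-BETA * insertion_energy(rng.random(3) * L)) for _ in range(2000)]
    mu_ex = -np.log(np.mean(boltz)) / BETA
    print(f"mu_ex ~ {mu_ex:.3f} kT")
    ```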

  11. Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.

    PubMed

    Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood

    2016-01-01

    Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with a single detector array was modeled using the MCNPX MC code. The MC-calculated photon fluence in the detector arrays was used for image reconstruction of a simple water phantom as well as of a polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method, using a Hann filter and spline interpolation. Using the MC results, we obtained the dose-response curve for images of the irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in CT number with increasing absorbed dose for the studied gel. Our results also showed that the current MC model of a CT scanner can be used for further studies of the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.
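
    The reconstruction chain described (filtered back-projection with a Hann filter) can be reproduced in a few lines with a recent scikit-image; the phantom and geometry below are stand-ins for the MCNPX-calculated detector fluence, and cubic interpolation stands in for the spline interpolation mentioned in the abstract:

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Stand-in for the MC-computed sinogram: forward-project a phantom.
    image = rescale(shepp_logan_phantom(), 0.5)
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=theta)

    # FBP with a Hann filter, as in the abstract.
    recon = iradon(sinogram, theta=theta, filter_name='hann', interpolation='cubic')
    print(f"RMS error: {np.sqrt(np.mean((recon - image) ** 2)):.4f}")
    ```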

  12. Efficient gradient-based Monte Carlo simulation of materials: Applications to amorphous Si and Fe and Ni clusters

    NASA Astrophysics Data System (ADS)

    Limbu, Dil; Biswas, Parthapratim

    We present a simple and efficient Monte Carlo (MC) simulation of iron (Fe) and nickel (Ni) clusters with N = 5-100 and of amorphous silicon (a-Si), starting from a random configuration. Using Sutton-Chen and Finnis-Sinclair potentials for Ni (fcc lattice) and Fe (bcc lattice), respectively, and the Stillinger-Weber potential for a-Si, the total energy of the system is optimized by employing MC moves that combine the stochastic nature of MC simulations with the gradient of the potential function. For both the iron and nickel clusters, the energy of the configurations is found to be very close to the values listed in the Cambridge Cluster Database, whereas the maximum force on each cluster is found to be much lower than the corresponding value obtained from the optimized structural configurations reported in the database. An extension of the method to model the amorphous state of Si is presented, and the results are compared with experimental data and with those obtained from other simulation methods. The work is partially supported by the NSF under Grant Number DMR 1507166.
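
    Gradient-informed MC moves of this kind are in the spirit of "smart Monte Carlo": a Langevin-biased proposal followed by a Metropolis-style correction that keeps the sampling exact. A minimal sketch on a 1-D double well; the potential, temperature, and step size are placeholders for the Sutton-Chen, Finnis-Sinclair, and Stillinger-Weber energies used in the work:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    BETA, ETA = 2.0, 0.05          # inverse temperature and step size (illustrative)

    U = lambda x: (x ** 2 - 1.0) ** 2        # double-well potential
    dU = lambda x: 4.0 * x * (x ** 2 - 1.0)  # its gradient

    def smart_mc_step(x):
        """Langevin-biased proposal with the MALA acceptance correction."""
        mean_fwd = x - ETA * dU(x)
        xp = mean_fwd + np.sqrt(2 * ETA / BETA) * rng.standard_normal()
        mean_rev = xp - ETA * dU(xp)
        log_q_fwd = -BETA * (xp - mean_fwd) ** 2 / (4 * ETA)
        log_q_rev = -BETA * (x - mean_rev) ** 2 / (4 * ETA)
        log_acc = -BETA * (U(xp) - U(x)) + log_q_rev - log_q_fwd
        return xp if np.log(rng.random()) < log_acc else x

    x = 1.5
    for _ in range(10_000):
        x = smart_mc_step(x)
    print(f"final x = {x:.3f}, energy = {U(x):.4f}")
    ```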

  13. Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.

  14. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method uses a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After the scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and half-fan cases, respectively. In both the phantom and patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved a computation time of less than 30 s, including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make it attractive for clinical use. PMID:25860299

  15. New simulation model of multicomponent crystal growth and inhibition.

    PubMed

    Wathen, Brent; Kuiper, Michael; Walker, Virginia; Jia, Zongchao

    2004-04-02

    We review a novel computational model for the study of crystal structures both on their own and in conjunction with inhibitor molecules. The model advances existing Monte Carlo (MC) simulation techniques by extending them from modeling 3D crystal surface patches to modeling entire 3D crystals, and by including the use of "complex" multicomponent molecules within the simulations. These advances make it possible to incorporate the 3D shape and non-uniform surface properties of inhibitors into simulations, and to study the effect these inhibitor properties have on the growth of whole crystals containing up to tens of millions of molecules. The application of this extended MC model to the study of antifreeze proteins (AFPs) and their effects on ice formation is reported, including the success of the technique in achieving AFP-induced ice-growth inhibition with concurrent changes to ice morphology that mimic experimental results. Simulations of ice-growth inhibition suggest that the degree of inhibition afforded by an AFP is a function of its ice-binding position relative to the underlying anisotropic growth pattern of ice. This extended MC technique is applicable to other crystal and crystal-inhibitor systems, including more complex crystal systems such as clathrates.

  16. An adaptive bias - hybrid MD/kMC algorithm for protein folding and aggregation.

    PubMed

    Peter, Emanuel K; Shea, Joan-Emma

    2017-07-05

    In this paper, we present a novel hybrid molecular dynamics/kinetic Monte Carlo (MD/kMC) algorithm and apply it to protein folding and aggregation in explicit solvent. The new algorithm uses a dynamical definition of biases throughout the MD component of the simulation, normalized in relation to the unbiased forces. The algorithm guarantees sampling of the underlying ensemble as a function of a single average linear coupling factor ⟨α⟩τ. We test the validity of the kinetics in simulations of dialanine and compare dihedral transition kinetics with long-time MD simulations. We find that for low ⟨α⟩τ values, the kinetics are in good quantitative agreement. In folding simulations of TrpCage and TrpZip4 in explicit solvent, we also find good quantitative agreement with experimental results and prior MD/kMC simulations. Finally, we apply our algorithm to study growth of the Alzheimer amyloid Aβ16-22 fibril by monomer addition. We observe two possible binding modes, one at the extremity of the fibril (elongation) and one on the surface of the fibril (lateral growth), on timescales ranging from ns to 8 μs.

  17. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    DOE PAGES

    Radak, Brian K.; Roux, Benoît

    2016-10-07

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.

  18. Pseudo hard-sphere potential for use in continuous molecular-dynamics simulation of spherical and chain molecules.

    PubMed

    Jover, J; Haslam, A J; Galindo, A; Jackson, G; Müller, E A

    2012-10-14

    We present a continuous pseudo-hard-sphere potential based on a cut-and-shifted Mie (generalized Lennard-Jones) potential with exponents (50, 49). Using this potential one can mimic the volumetric, structural, and dynamic properties of the discontinuous hard-sphere potential over the whole fluid range. The continuous pseudo-potential has the advantage that it may be incorporated directly into off-the-shelf molecular-dynamics code, allowing the user to capitalise on existing hardware and software advances. Simulation results for the compressibility factor of the fluid and solid phases of our pseudo hard spheres are presented and compared both to the Carnahan-Starling equation of state of the fluid and to published data, the differences being indistinguishable within simulation uncertainty. The specific form of the potential is employed to simulate flexible chains formed from these pseudo hard spheres at contact (pearl-necklace model) for m_c = 4, 5, 7, 8, 16, 20, 100, 201, and 500 monomer segments. The compressibility factor of the chains per monomer, m_c, approaches a limiting value at reasonably small values, m_c < 50, as predicted by Wertheim's first-order thermodynamic perturbation theory. Simulation results are also presented for highly asymmetric mixtures of pseudo hard spheres, with diameter ratios of 3:1, 5:1, and 20:1 over the whole composition range.
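
    The cut-and-shifted Mie (50, 49) potential has a simple closed form: U(r) = C ε [(σ/r)^50 − (σ/r)^49] + ε for r below the cut-off at the potential minimum r_c = (50/49) σ, and zero beyond, with the standard Mie prefactor C = (n/(n−m)) (n/m)^(m/(n−m)). A sketch in reduced units:

    ```python
    import numpy as np

    N_EXP, M_EXP = 50, 49
    C_MIE = (N_EXP / (N_EXP - M_EXP)) * (N_EXP / M_EXP) ** (M_EXP / (N_EXP - M_EXP))
    R_CUT = N_EXP / M_EXP           # cut-off at the potential minimum, in units of sigma

    def pseudo_hs(r, eps=1.0, sigma=1.0):
        """Cut-and-shifted Mie (50, 49) pseudo-hard-sphere potential."""
        r = np.asarray(r, dtype=float)
        u = C_MIE * eps * ((sigma / r) ** N_EXP - (sigma / r) ** M_EXP) + eps
        return np.where(r < R_CUT * sigma, u, 0.0)

    r = np.array([0.97, 1.0, 1.01, 1.02, 1.2])
    print(pseudo_hs(r))   # steeply repulsive below sigma, zero beyond ~1.0204*sigma
    ```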

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
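
    The Pauli-blocking modification amounts to an extra rejection step in the scattering routine: a proposed final state is accepted only with probability 1 − f(E′), where f is the occupation of the final state (computed self-consistently in a full simulator). A minimal sketch with a static Fermi-Dirac occupation and invented parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    KB_T, E_F = 0.0259, 0.15     # eV: room temperature and an assumed Fermi level

    def occupation(e):
        """Fermi-Dirac occupation of the final state (static stand-in for f)."""
        return 1.0 / (1.0 + np.exp((e - E_F) / KB_T))

    def scatter_with_pauli(e_initial, de=-0.02):
        """Propose a phonon-emission final state; reject it if Pauli-blocked."""
        e_final = max(e_initial + de, 0.0)
        if rng.random() < 1.0 - occupation(e_final):
            return e_final        # transition allowed
        return e_initial          # blocked: treat as self-scattering

    accepted = sum(scatter_with_pauli(0.10) != 0.10 for _ in range(10_000))
    print(f"acceptance at E = 0.10 eV: {accepted / 10_000:.2%}")
    ```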

  20. SU-F-T-610: Comparison of Output Factors for Small Radiation Fields Used in SBRT Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, R; Eldib, A; Li, J

    2016-06-15

    Purpose: In order to better understand our previous dose verification results comparing measurements and treatment planning system (TPS) calculations for SBRT plans with different sized targets, the goal of the present work was to compare output factors for small fields measured using EDR2 films with TPS calculations and Monte Carlo (MC) simulations. Methods: A 6 MV beam was delivered to EDR2 films for each of the following field sizes: 1×1 cm², 1.5×1.5 cm², 2×2 cm², 3×3 cm², 4×4 cm², 5×5 cm², and 10×10 cm². The films were developed in a film processor, then scanned with a Vidar VXR-16 scanner and analyzed using RIT113 version 6.1. A standard calibration curve was obtained with the 6 MV beam and was used to obtain absolute dose for the measured field sizes. Similar plans for all field sizes mentioned above were generated using Eclipse with the Analytical Anisotropic Algorithm. Likewise, MC simulations were carried out using MCSIM, an in-house MC code, for the different field sizes. Output factors normalized to the 10×10 cm² reference field were calculated for the different field sizes in all three cases and compared. Results: For field sizes ranging from 1×1 cm² to 2×2 cm², the differences in output factors between measurements (films), TPS, and MC simulations were within 0.22%. For field sizes ranging from 3×3 cm² to 5×5 cm², the differences in output factors were within 0.10%. Conclusion: No clinically significant difference was found in the output factors for the different field sizes obtained from films, TPS, and MC simulations. Our results showed that the output factors are predicted accurately by the TPS when compared to actual measurements and the superior Monte Carlo dose calculation method. This study helps us understand our previously obtained dose verification results for small fields used in SBRT treatment.

  1. Atomistic simulations of the effect of embedded hydrogen and helium on the tensile properties of monocrystalline and nanocrystalline tungsten

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Kecskes, Laszlo J.; Zhu, Kaigui; Wei, Qiuming

    2016-12-01

    The uniaxial tensile properties of monocrystalline tungsten (MC-W) and nanocrystalline tungsten (NC-W) with embedded hydrogen and helium atoms have been investigated using molecular dynamics (MD) simulations in the context of radiation damage evolution. Different strain rates were imposed to investigate the strain rate sensitivity (SRS) of the samples. Results show that the plastic deformation of MC-W and NC-W is dominated by different mechanisms, namely dislocation-based activity for MC-W and grain-boundary-based activity for NC-W. For MC-W, the SRS increases and a transition appears in the deformation mechanism with increasing embedded-atom concentration. However, no obvious embedded-atom concentration dependence of the SRS has been observed for NC-W. Instead, in the latter case, the embedded atoms facilitate grain boundary sliding and intergranular fracture. Additionally, strongly strain-enhanced He cluster growth has been observed. The corresponding underlying mechanisms are discussed.

  2. Application of MC1 to Wind Cave National Park: Lessons from a small-scale study: Chapter 8

    USGS Publications Warehouse

    King, David A.; Bachelet, Dominique M.; Symstad, Amy J.

    2015-01-01

    MC1 was designed for application to large regions that include a wide range of elevation and topography, thereby encompassing a broad range of climates and vegetation types. The authors applied the dynamic global vegetation model MC1 to Wind Cave National Park (WCNP) in the southern Black Hills of South Dakota, USA, on the ecotone between ponderosa pine forest to the northwest and mixed-grass prairie to the southeast. They calibrated MC1 to simulate adequate fire effects in the warmer southeastern parts of the park, ensuring grasslands there while allowing forests to grow to the northwest, and then simulated future vegetation with climate projections from three GCMs. The results suggest that fire frequency, as affected by climate and/or human intervention, may be more important than the direct effects of climate in determining the distribution of ponderosa pine in the Black Hills region, both historically and in the future.

  3. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure based on simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem-and-ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B and to the bladder and rectum were compared with the results of superposition-1. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points; the difference at all important dosimetry points was found to be less than 5%. According to these results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, in which the doses from the individual sources obtained by MC simulation are summed, can estimate the doses to points A and B and to the bladder and rectum with good accuracy.
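
    The superposition calculation referred to here, summing per-dwell dose contributions weighted by dwell time, reduces to a few lines once per-source dose-rate kernels are available. The inverse-square toy kernel below ignores scatter, attenuation, and TG-43 anisotropy and is purely illustrative:

    ```python
    import numpy as np

    # Hypothetical dwell positions (cm) and dwell times (s) along a tandem.
    dwells = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.5], [0.0, 0.0, 1.0]])
    times = np.array([12.0, 10.0, 8.0])
    SK = 4.0e4   # air-kerma-strength-like scaling constant (arbitrary units)

    def dose_at(point):
        """Superpose per-dwell contributions: inverse-square toy kernel only."""
        r2 = ((dwells - point) ** 2).sum(axis=1)
        return float((SK * times / r2).sum())

    point_a = np.array([2.0, 0.0, 2.0])   # a point-A-like location, 2 cm lateral/superior
    print(f"dose at point A ~ {dose_at(point_a):.1f} (arbitrary units)")
    ```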

  4. Entropic formulation for the protein folding process: Hydrophobic stability correlates with folding rates

    NASA Astrophysics Data System (ADS)

    Dal Molin, J. P.; Caliri, A.

    2018-01-01

    Here we focus on the conformational search for the native structure when it is ruled by the hydrophobic effect and the steric specificities of the amino acids. Our main tool of investigation is a 3D lattice model provided with a ten-letter alphabet, the stereochemical model. This minimalist model was conceived for Monte Carlo (MC) simulations with the kinetic behavior of protein-like chains in solution in mind. We have three central goals here. The first is to characterize the folding time (τ) by two distinct sampling methods, so we present two sets of 10³ MC simulations for a fast protein-like sequence. The resulting sets of characteristic folding times, τ and τq, were obtained by application of the standard Metropolis algorithm (MA) and of an enhanced algorithm (MqA), respectively. The finding for τq shows two things: (i) the chain-solvent hydrophobic interactions {h_k} plus a set of inter-residue steric constraints {c_i,j} are able to emulate the conformational search for the native structure; for each of the 10³ MC simulations performed, the target is always found within a finite time window; and (ii) the ratio τq/τ ≅ 1/10 suggests that the effect of local thermal fluctuations, encompassed by the Tsallis weight, provides the chain with an innate efficiency to escape from energetic and steric traps. We performed additional MC simulations with variations of our design rule to confirm this first result; both the MA and the MqA were applied to a restricted set of targets, and a physical insight is provided. Our second finding was obtained from a set of 600 independent MC simulations performed with the MqA alone, applied to an extended set of 200 representative targets, our native structures. The results show how structural patterns modulate τq, which covers four orders of magnitude; this finding is our second goal. The third and last result was obtained with a special kind of simulation performed to explore a possible connection between the hydrophobic component of protein stability and the native structural topology. We simulated those same 200 targets again with the MqA only, but this time we evaluated the relative frequency {φq} with which each target visits its corresponding native structure over an appropriate simulation time. Due to the presence of the hydrophobic effect in our approach, we obtained a strong correlation between stability and folding rate (R = 0.85): the faster a sequence finds its target, the larger the hydrophobic component of its stability. This strong correlation fulfills our last goal. This final finding suggests that the hydrophobic effect could not be a general stabilizing factor for proteins.
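
    The contrast between the MA and the MqA can be illustrated by their acceptance rules: the MqA replaces the Boltzmann weight with a Tsallis generalized weight, which fattens the tail of accepted uphill moves and helps chains escape traps. A minimal side-by-side sketch (the q and β values are illustrative, not those of the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    BETA, Q = 1.0, 1.1        # inverse temperature and Tsallis q (illustrative)

    def w_boltzmann(e):
        return np.exp(-BETA * e)

    def w_tsallis(e):
        """Generalized Tsallis weight; q -> 1 recovers the Boltzmann weight."""
        arg = 1.0 - (1.0 - Q) * BETA * e
        return arg ** (1.0 / (1.0 - Q)) if arg > 0 else 0.0

    def accept(e_old, e_new, weight):
        ratio = weight(e_new) / weight(e_old)
        return rng.random() < min(1.0, ratio)

    # Uphill move of +3 kT: the Tsallis rule accepts noticeably more often.
    for weight in (w_boltzmann, w_tsallis):
        hits = sum(accept(0.0, 3.0, weight) for _ in range(100_000))
        print(f"{weight.__name__}: {hits / 100_000:.3f}")
    ```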

  5. Monte Carlo simulation tool for online treatment monitoring in hadrontherapy with in-beam PET: A patient study.

    PubMed

    Fiorina, E; Ferrero, V; Pennazio, F; Baroni, G; Battistoni, G; Belcari, N; Cerello, P; Camarlinghi, N; Ciocca, M; Del Guerra, A; Donetti, M; Ferrari, A; Giordanengo, S; Giraudo, G; Mairani, A; Morrocchi, M; Peroni, C; Rivetti, A; Da Rocha Rolo, M D; Rossi, S; Rosso, V; Sala, P; Sportelli, G; Tampellini, S; Valvo, F; Wheadon, R; Bisogni, M G

    2018-05-07

    Hadrontherapy is a method for treating cancer with very targeted dose distributions and enhanced radiobiological effects. To fully exploit these advantages, in vivo range monitoring systems are required. These devices measure, preferably during the treatment, the secondary radiation generated by beam-tissue interactions. However, since the correlation of the secondary radiation distribution with the dose is not straightforward, Monte Carlo (MC) simulations are very important for treatment quality assessment. The INSIDE project constructed an in-beam PET scanner to detect the signals generated by the positron-emitting isotopes resulting from projectile-target fragmentation. In addition, a FLUKA-based simulation tool was developed to predict the corresponding reference PET images using a detailed scanner model. The INSIDE in-beam PET scanner was used to monitor two consecutive proton treatment sessions of a patient at the Italian Center for Oncological Hadrontherapy (CNAO). The reconstructed PET images were updated every 10 s, providing near real-time quality assessment. By halfway through the treatment, the statistics of the measured PET images were already significant enough for comparison with the simulations, with average differences in the activity range of less than 2.5 mm along the beam direction. Without taking into account any preferential direction, differences within 1 mm were found. In this paper, the INSIDE MC simulation tool is described and the results of the first in vivo agreement evaluation are reported. These results have justified a clinical trial, in which the MC simulation tool will be used on a daily basis to study the compliance tolerances between the measured and simulated PET images. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. Comparing stochastic proton interactions simulated using TOPAS-nBio to experimental data from fluorescent nuclear track detectors

    NASA Astrophysics Data System (ADS)

    Underwood, T. S. A.; Sung, W.; McFadden, C. H.; McMahon, S. J.; Hall, D. C.; McNamara, A. L.; Paganetti, H.; Sawakuchi, G. O.; Schuemann, J.

    2017-04-01

    Whilst Monte Carlo (MC) simulations of proton energy deposition have been well-validated at the macroscopic level, their microscopic validation remains lacking. Equally, no gold standard yet exists for experimental metrology of individual proton tracks. In this work we compare the distributions of stochastic proton interactions simulated using the TOPAS-nBio MC platform against confocal microscope data for Al2O3:C,Mg fluorescent nuclear track detectors (FNTDs). We irradiated 8 × 4 × 0.5 mm³ FNTD chips inside a water phantom, positioned at seven positions along a pristine proton Bragg peak with a range in water of 12 cm. MC simulations were implemented in two stages: (1) using TOPAS to model the beam properties within a water phantom and (2) using TOPAS-nBio with Geant4-DNA physics to score particle interactions through a water surrogate of Al2O3:C,Mg. The measured median track integrated brightness (IB) was observed to be strongly correlated to both (i) voxelized track-averaged linear energy transfer (LET) and (ii) the frequency-mean microdosimetric lineal energy, ȳF, both simulated in pure water. Histograms of FNTD track IB were compared against TOPAS-nBio histograms of the number of terminal electrons per proton, scored in water with mass-density scaled to mimic Al2O3:C,Mg. Trends between exposure depths observed in TOPAS-nBio simulations were experimentally replicated in the study of FNTD track IB. Our results represent an important first step towards the experimental validation of MC simulations on the sub-cellular scale and suggest that FNTDs can enable experimental study of the microdosimetric properties of individual proton tracks.

  7. Comparing stochastic proton interactions simulated using TOPAS-nBio to experimental data from fluorescent nuclear track detectors.

    PubMed

    Underwood, T S A; Sung, W; McFadden, C H; McMahon, S J; Hall, D C; McNamara, A L; Paganetti, H; Sawakuchi, G O; Schuemann, J

    2017-04-21

    Whilst Monte Carlo (MC) simulations of proton energy deposition have been well-validated at the macroscopic level, their microscopic validation remains lacking. Equally, no gold standard yet exists for experimental metrology of individual proton tracks. In this work we compare the distributions of stochastic proton interactions simulated using the TOPAS-nBio MC platform against confocal microscope data for Al2O3:C,Mg fluorescent nuclear track detectors (FNTDs). We irradiated 8 × 4 × 0.5 mm³ FNTD chips inside a water phantom, positioned at seven positions along a pristine proton Bragg peak with a range in water of 12 cm. MC simulations were implemented in two stages: (1) using TOPAS to model the beam properties within a water phantom and (2) using TOPAS-nBio with Geant4-DNA physics to score particle interactions through a water surrogate of Al2O3:C,Mg. The measured median track integrated brightness (IB) was observed to be strongly correlated to both (i) voxelized track-averaged linear energy transfer (LET) and (ii) the frequency-mean microdosimetric lineal energy, ȳF, both simulated in pure water. Histograms of FNTD track IB were compared against TOPAS-nBio histograms of the number of terminal electrons per proton, scored in water with mass-density scaled to mimic Al2O3:C,Mg. Trends between exposure depths observed in TOPAS-nBio simulations were experimentally replicated in the study of FNTD track IB. Our results represent an important first step towards the experimental validation of MC simulations on the sub-cellular scale and suggest that FNTDs can enable experimental study of the microdosimetric properties of individual proton tracks.

  8. Relationship between 578-nm (copper vapor) laser beam geometry and heat distribution within biological tissues

    NASA Astrophysics Data System (ADS)

    Ilyasov, Ildar K.; Prikhodko, Constantin V.; Nevorotin, Alexey J.

    1995-01-01

    A Monte Carlo (MC) simulation model and a thermoindicative tissue phantom were applied to evaluate the depth of tissue necrosis (DTN) resulting from quasi-cw copper vapor laser (578 nm) irradiation. It has been shown that the focusing angle of the incident light is essential for DTN. In particular, DTN rose significantly as this angle was increased, up to +20° and +5° for the MC simulation and tissue phantom models, respectively, with no further increase in necrosis depth above these angles. Notably, the relationship between focusing angle and DTN was apparently stronger for the real target than for the MC-derived hypothetical one. To what extent these data are applicable to medical practice can be evaluated in animal models simulating laser-assisted therapy for port-wine stains (PWS) or related dermatologic lesions with converged 578 nm laser beams.

  9. SU-E-T-489: Quantum versus Classical Trajectory Monte Carlo Simulations of Low Energy Electron Transport.

    PubMed

    Thomson, R; Kawrakow, I

    2012-06-01

    Widely-used classical trajectory Monte Carlo simulations of low energy electron transport neglect the quantum nature of electrons; however, at sub-1 keV energies quantum effects have the potential to become significant. This work compares quantum and classical simulations within a simplified model of electron transport in water. Electron transport is modeled in water droplets using quantum mechanical (QM) and classical trajectory Monte Carlo (MC) methods. Water droplets are modeled as collections of point scatterers representing water molecules from which electrons may be isotropically scattered. The role of inelastic scattering is investigated by introducing absorption. QM calculations involve numerically solving a system of coupled equations for the electron wavefield incident on each scatterer. A minimum distance between scatterers is introduced to approximate structured water. The average QM water droplet incoherent cross section is compared with the MC cross section; a relative error (RE) on the MC results is computed. RE varies with electron energy, average and minimum distances between scatterers, and scattering amplitude. The mean free path is generally the relevant length scale for estimating RE. The introduction of a minimum distance between scatterers increases RE substantially (factors of 5 to 10), suggesting that the structure of water must be modeled for accurate simulations. Inelastic scattering does not improve agreement between QM and MC simulations: for the same magnitude of elastic scattering, the introduction of inelastic scattering increases RE. Droplet cross sections are sensitive to droplet size and shape; considerable variations in RE are observed with changing droplet size and shape. At sub-1 keV energies, quantum effects may become non-negligible for electron transport in condensed media. Electron transport is strongly affected by the structure of the medium. Inelastic scatter does not improve agreement between QM and MC simulations of low energy electron transport in condensed media. © 2012 American Association of Physicists in Medicine.
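
    As an illustration of the classical trajectory picture compared in this work, the following minimal Python sketch propagates a single electron through a homogeneous cloud of isotropic point scatterers, drawing free paths from an exponential distribution. It is an assumption-laden toy (arbitrary mean free path, no droplet boundary, no absorption, no QM wavefield solver), not the authors' code.

      import numpy as np

      rng = np.random.default_rng(7)
      mfp = 5.0                            # mean free path (nm), arbitrary value

      pos = np.zeros(3)
      direction = np.array([0.0, 0.0, 1.0])
      for _ in range(100):                 # one 100-collision electron history
          pos += direction * rng.exponential(mfp)      # free flight between scatterers
          cos_t = rng.uniform(-1.0, 1.0)               # isotropic scattering angle
          phi = rng.uniform(0.0, 2.0 * np.pi)
          sin_t = np.sqrt(1.0 - cos_t ** 2)
          direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

      print(f"net displacement after 100 collisions: {np.linalg.norm(pos):.1f} nm")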

  10. Theoretical study of the ammonia nitridation rate on an Fe (100) surface: A combined density functional theory and kinetic Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeo, Sang Chul; Lee, Hyuck Mo, E-mail: hmlee@kaist.ac.kr; Lo, Yu Chieh

    2014-10-07

    Ammonia (NH{sub 3}) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (E{sub b}) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (E{sub b}) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH{sub 3} nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH{sub 3} nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH{sub 3} nitridation influenced by nitrogen surface coverage and allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH{sub 3} nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.

  11. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17×17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  12. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the CrystalBall® (CB) software, which works with Microsoft® Excel spreadsheet models, is used. The study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM), and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. Copyright © 2013 Elsevier B.V. All rights reserved.
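
    As a rough illustration of this kind of stochastic LCI calculation, the sketch below samples normally distributed production volumes and reports the spread of an aggregate quantity over 10,000 trials. The product list and all numbers are hypothetical placeholders, not the MSP inventory, and plain NumPy stands in for the CrystalBall®/Excel workflow.

      import numpy as np

      rng = np.random.default_rng(seed=1)
      trials = 10_000

      # (mean, standard deviation) per product stream, in Mt -- illustrative values only
      products = {
          "steel": (5.0, 0.25),
          "coke": (1.2, 0.06),
          "pig_iron": (4.1, 0.20),
          "sinter": (6.3, 0.30),
      }

      samples = {name: rng.normal(mu, sd, trials) for name, (mu, sd) in products.items()}
      total = sum(samples.values())       # simple aggregate inventory quantity

      print(f"mean total = {total.mean():.2f} Mt, "
            f"95% interval = [{np.percentile(total, 2.5):.2f}, "
            f"{np.percentile(total, 97.5):.2f}] Mt")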

  13. Performance Analysis of HF Band FB-MC-SS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussein Moradi; Stephen Andrew Laraway; Behrouz Farhang-Boroujeny

    In a recent paper [1] the filter bank multicarrier spread spectrum (FB-MC-SS) waveform was proposed for wideband spread spectrum HF communications. A significant benefit of this waveform is robustness against narrow and partial band interference. Simulation results in [1] demonstrated good performance in a wideband HF channel over a wide range of conditions. In this paper we present a theoretical analysis of the bit error probability for this system. Our analysis adapts the results from [2], where BER performance was analyzed for maximum ratio combining systems, accounting for correlation between subcarriers and channel estimation error. Equations are given for BER that closely match the simulated performance in most situations.

  14. Spatio-energetic cross talk in photon counting detectors: Detector model and correlated Poisson data generator.

    PubMed

    Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Stierstorfer, Karl; Kappler, Steffen

    2016-12-01

    An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (and thus the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well.) As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross talk and the lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via the photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled using two simplifying assumptions: energy conservation and mutual exclusiveness. Mutual exclusiveness means that no more than two pixels measure energy from one photon. The effect of model parameters has been studied and results were compared with MC simulations. The agreement with respect to the spectrum was evaluated using the reduced χ2 statistic, a weighted sum of squared errors χ2_red (≥1), where χ2_red = 1 indicates a perfect fit. The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ2_red values were 1.049 with 90 keV data and 1.007 with 140 kVp data. The degrees of cross talk (in terms of the relative increase from single pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for the MC simulations, while they were 21% and 17%, respectively, for the model. The covariance was in strong qualitative agreement, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and the resulting spatio-energetic correlation between PCD pixels. The MC simulation validated the model's accuracy.
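
    A minimal sketch of a correlated Poisson count generator in the spirit of this model is given below, assuming a 1D pixel row, a fixed sharing probability, and a monoenergetic beam; it enforces the two simplifying assumptions quoted above (energy conservation and mutual exclusiveness) but omits detection efficiency, fluorescence, and electronic noise. All parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pix, flux, p_share, e0 = 16, 1000.0, 0.2, 90.0  # pixels, mean photons/pixel, share prob, keV

      counts = np.zeros(n_pix, dtype=int)
      energy = np.zeros(n_pix)

      n_photons = rng.poisson(flux, n_pix)      # independent Poisson arrivals per pixel
      for i in range(n_pix):
          n_sh = int((rng.random(n_photons[i]) < p_share).sum())  # photons near a boundary
          j = (i + 1) % n_pix                   # neighbour that receives the split charge
          counts[i] += n_photons[i]             # every photon counts once in its own pixel
          counts[j] += n_sh                     # shared photons also count next door
          frac = rng.uniform(0.1, 0.9, n_sh)    # fraction of e0 kept by pixel i
          energy[i] += (n_photons[i] - n_sh) * e0 + frac.sum() * e0
          energy[j] += (1.0 - frac).sum() * e0  # energy conserved across the pixel pair

      extra = counts.sum() / n_photons.sum() - 1.0
      print(f"double-counting inflates total counts by {100 * extra:.1f}%")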

  15. Towards real-time photon Monte Carlo dose calculation in the cloud

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-01

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of GPUs, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We showed that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative to the currently used GPU or cluster solutions for near real-time accurate dose calculations.

  16. Towards real-time photon Monte Carlo dose calculation in the cloud.

    PubMed

    Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-07

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of GPUs, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We showed that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative to the currently used GPU or cluster solutions for near real-time accurate dose calculations.

  17. Predicting field-scale dispersion under realistic conditions with the polar Markovian velocity process model

    NASA Astrophysics Data System (ADS)

    Dünser, Simon; Meyer, Daniel W.

    2016-06-01

    In most groundwater aquifers, dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field. This effect is referred to as macrodispersion. Since, in practice, the complete conductivity field is virtually never available beyond a few point measurements, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations from a given geostatistical model for the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational costs of MC, the polar Markovian velocity process (PMVP) model was recently introduced, delivering predictions at computing times about three orders of magnitude smaller. In artificial test cases, the PMVP model has provided good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup, derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary with a large number of unevenly scattered conductivity measurements. Validations were performed against reference MC simulations and good overall agreement was found. Moreover, simulations of a simplified setup with a single measurement were conducted in order to reassess the model's most fundamental assumptions and to provide guidance for model improvements.

  18. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. Material and methods In this study, the superposition method is used for obtaining the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the dose at points A, B, bladder, and rectum was compared with the results of superposition. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, adding the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061

  19. SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.

    PubMed

    Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S

    2012-06-01

    With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC's statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT plan are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10⁸ to 10⁶, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
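
    The experiment described above can be mimicked in a few lines: optimize a fluence map against a noisy dose-deposition-coefficient matrix with projected gradient descent, then recalculate the dose with the noise-free matrix. The dimensions, noise level, and prescription below are arbitrary stand-ins, not the head-and-neck case.

      import numpy as np

      rng = np.random.default_rng(2)
      n_vox, n_beamlet = 200, 50

      D_true = rng.random((n_vox, n_beamlet)) / n_beamlet                   # noise-free DDC matrix
      D_noisy = D_true * (1.0 + 0.038 * rng.standard_normal(D_true.shape))  # ~3.8% relative MC noise
      d_presc = np.ones(n_vox)                                              # uniform prescription

      step = 1.0 / np.linalg.norm(D_noisy, 2) ** 2     # stable step for gradient descent
      f = np.ones(n_beamlet)                           # initial fluence map
      for _ in range(5000):                            # projected gradient descent
          grad = D_noisy.T @ (D_noisy @ f - d_presc)   # gradient of the quadratic penalty
          f = np.maximum(f - step * grad, 0.0)         # keep fluence non-negative

      dose = D_true @ f                                # recalculate with the exact DDC
      print(f"mean relative dose error: {np.abs(dose - d_presc).mean():.4f}")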

  20. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
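
    The scaling step itself reduces to a weighted Beer-Lambert factor applied to the zero-absorption curve. The sketch below applies it to a stand-in curve, assuming the per-layer time fractions have already been obtained from the closed-form average-path expression; the curve shape, absorption coefficients, and fractions are all placeholders, not the paper's data.

      import numpy as np

      C_MM_PER_PS = 0.2998 / 1.4              # photon speed in tissue (n = 1.4), mm/ps

      t = np.linspace(10, 2000, 400)                 # time axis, ps
      r_zero_abs = t ** -1.5 * np.exp(-t / 800.0)    # stand-in zero-absorption reflectance

      mu_a = np.array([0.002, 0.01])          # per-layer absorption coefficients, 1/mm
      f_layer = np.array([0.7, 0.3])          # average fraction of time spent in each layer

      path = C_MM_PER_PS * t                  # total path length at each time bin
      r_scaled = r_zero_abs * np.exp(-path * (mu_a * f_layer).sum())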

  1. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning.

    PubMed

    Chetty, Indrin J; Curran, Bruce; Cygler, Joanna E; DeMarco, John J; Ezzell, Gary; Faddegon, Bruce A; Kawrakow, Iwan; Keall, Paul J; Liu, Helen; Ma, C M Charlie; Rogers, D W O; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V

    2007-12-01

    The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.

  2. McStas 1.1: a tool for building neutron Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.

    2000-03-01

    McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.

  3. Degradation of microcystin-LR by highly efficient AgBr/Ag3PO4/TiO2 heterojunction photocatalyst under simulated solar light irradiation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Utsumi, Motoo; Yang, Yingnan; Li, Dawei; Zhao, Yingxin; Zhang, Zhenya; Feng, Chuanping; Sugiura, Norio; Cheng, Jay Jiayang

    2015-01-01

    A novel photocatalyst, AgBr/Ag3PO4/TiO2, was developed by a facile in situ deposition method and used for the degradation of microcystin-LR. TiO2 (P25), as a cost-effective chemical, was used to improve the stability of AgBr/Ag3PO4 under simulated solar light irradiation. Photocatalytic activity tests for this heterojunction were conducted under simulated solar light irradiation using methyl orange as the targeted pollutant. The results indicated that the optimal Ag to Ti molar ratio for the photocatalytic activity of the resulting AgBr/Ag3PO4/TiO2 heterojunction was 1.5 (named 1.5 BrPTi), which possessed higher photocatalytic capacity than AgBr/Ag3PO4. The 1.5 BrPTi heterojunction was also more stable than AgBr/Ag3PO4 in photocatalysis. This highly efficient and relatively stable photocatalyst was further tested for degradation of the hepatotoxin microcystin-LR (MC-LR). The results suggested that MC-LR was much more easily degraded by 1.5 BrPTi than by AgBr/Ag3PO4. The quenching effects of different scavengers proved that reactive h+ and •OH played important roles in MC-LR degradation.

  4. A fragment-based approach to the SAMPL3 Challenge

    NASA Astrophysics Data System (ADS)

    Kulp, John L.; Blumenthal, Seth N.; Wang, Qiang; Bryan, Richard L.; Guarnieri, Frank

    2012-05-01

    The success of molecular fragment-based design depends critically on the ability to make predictions of binding poses and of affinity ranking for compounds assembled by linking fragments. The SAMPL3 Challenge provides a unique opportunity to evaluate the performance of a state-of-the-art fragment-based design methodology with respect to these requirements. In this article, we present results derived from linking fragments to predict affinity and pose in the SAMPL3 Challenge. The goal is to demonstrate how incorporating different aspects of modeling protein-ligand interactions impacts the accuracy of the predictions, including protein dielectric models, charged versus neutral ligands, ΔΔG solvation energies, and induced conformational stress. The core method is based on annealing of the chemical potential in a Grand Canonical Monte Carlo (GC/MC) simulation. By imposing an initially very high chemical potential and then automatically running a sequence of simulations at successively decreasing chemical potentials, the GC/MC simulation efficiently discovers statistical distributions of bound fragment locations and orientations not found reliably without the annealing. This method accounts for configurational entropy and the role of bound water molecules, and results in a prediction of all the locations on the protein that have any affinity for the fragment. Disregarding any of these factors in affinity-rank prediction leads to significantly worse correlation with experimentally determined free energies of binding. We relate three important conclusions from this challenge as applied to GC/MC: (1) modeling neutral ligands, regardless of the charged state in the active site, produced better affinity ranking than using charged ligands, although in both cases the poses were almost exactly overlaid; (2) simulating explicit water molecules in the GC/MC gave better affinity and pose predictions; and (3) applying a ΔΔG solvation correction further improved the ranking of the neutral ligands. Using the GC/MC method under a variety of parameters in the blinded SAMPL3 Challenge provided important insights into the relevant parameters and boundaries in predicting binding affinities using simulated annealing of chemical potential calculations.
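
    To make the annealing idea concrete, the sketch below runs grand canonical MC insertions and deletions at a sequence of decreasing chemical potentials, using a non-interacting fragment "gas" so the acceptance rules stay short (ΔE = 0, thermal wavelength set to 1). A real GC/MC fragment simulation would evaluate protein-fragment and water interaction energies inside the acceptance test; this is only a toy of the annealing schedule.

      import math, random

      random.seed(3)
      V, beta = 1000.0, 1.0          # volume (arbitrary units), 1/kT
      N = 0                          # current number of inserted fragments

      for mu in [2.0, 1.0, 0.0, -1.0, -2.0, -3.0]:     # annealed chemical potential
          for _ in range(20000):
              if random.random() < 0.5:                          # attempt insertion
                  acc = (V / (N + 1)) * math.exp(beta * mu)      # dE = 0 for the ideal gas
                  if random.random() < min(1.0, acc):
                      N += 1
              elif N > 0:                                        # attempt deletion
                  acc = (N / V) * math.exp(-beta * mu)
                  if random.random() < min(1.0, acc):
                      N -= 1
          print(f"mu = {mu:+.1f}: N ~ {N} "
                f"(ideal-gas expectation {V * math.exp(beta * mu):.0f})")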

  5. Traffic accident simulation : final report.

    DOT National Transportation Integrated Search

    1992-06-01

    The purpose of this research was to determine if HVOSM (Highway Vehicle Object Simulation Model) could be used to model a vehicle with a modern front (or rear) suspension system such as a McPherson strut and have the results of the dynamic model be v...

  6. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to account for the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that allow one to correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that allows the Pauli exclusion principle to be included for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
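
    The standard way to enforce Pauli blocking in such MC codes is a rejection step: a proposed final state k′ is accepted with probability 1 − f(k′), where f is the electron distribution function. The sketch below illustrates this step with a static Fermi-Dirac occupation as a stand-in for the self-consistently tracked distribution; it is not the paper's e-e algorithm, just the generic rejection idea.

      import math, random

      random.seed(4)

      def fermi_dirac(e, mu=0.1, kt=0.025):
          """Occupation of a state with energy e (eV); mu and kt are illustrative."""
          return 1.0 / (1.0 + math.exp((e - mu) / kt))

      def accept_scattering(e_final):
          """Reject transitions into states that are already occupied (Pauli blocking)."""
          return random.random() < 1.0 - fermi_dirac(e_final)

      # transitions deep below the Fermi level are almost always blocked:
      for e in (0.0, 0.1, 0.3):
          rate = sum(accept_scattering(e) for _ in range(100000)) / 100000
          print(f"E' = {e:.2f} eV: acceptance ~ {rate:.3f}, 1 - f = {1 - fermi_dirac(e):.3f}")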

  7. Study on photon transport problem based on the platform of molecular optical simulation environment.

    PubMed

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our study method in terms of both accuracy and efficiency.

  8. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our study method in terms of both accuracy and efficiency. PMID:20445737

  9. Raman Monte Carlo simulation for light propagation for tissue with embedded objects

    NASA Astrophysics Data System (ADS)

    Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit

    2018-02-01

    Monte Carlo (MC) simulation is one of the most prominent simulation techniques and is rapidly becoming the model of choice for studying light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and modelled with different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid, and ellipsoid) into the multi-layered structure. These geometries are useful in providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head, and other simulation media. MC simulations were performed on various geometric media. The simulation of MCML with embedded objects (MCML-EO) was extended to propagate photons in the defined medium with Raman scattering, and the location of Raman photon generation was recorded. Simulations were performed on a modelled breast tissue with tumors (spherical and ellipsoidal) and blood vessels (cylindrical). Results were presented as both A-line and B-line scans of the embedded objects to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.

  10. Equations of state for the fully flexible WCA chains in the fluid and solid phases based on Wertheim's TPT2

    NASA Astrophysics Data System (ADS)

    Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid

    2018-03-01

    Based on Wertheim's second order thermodynamic perturbation theory (TPT2), equations of state (EOSs) are presented for the fluid and solid phases of tangent, freely jointed spheres. The spheres are considered to interact with each other through the Weeks-Chandler-Andersen (WCA) potential. The developed TPT2 EOS is the sum of a monomeric reference term and a perturbation contribution due to bonding. MC NVT simulations are performed to determine the structural properties of the reference system in the reduced temperature range 0.6 ≤ T* ≤ 4.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. Mathematical functions are fitted to the simulation results of the reference system and employed in the framework of Wertheim's theory to develop TPT2 EOSs for the fluid and solid phases. The extended EOSs are compared to the MC NPT simulation results for the compressibility factor and internal energy of fully flexible chain systems. Simulations are performed for the WCA chain system for chain lengths of up to 15 at T* = 1.0, 1.5, 2.0, and 3.0. Across all the reduced temperatures, the agreement between the results of the TPT2 EOS and the MC simulations is remarkable. The overall average absolute relative percent deviation at T* = 1.0 for the compressibility factor, over the entire range of chain lengths covered, is 0.51 for the solid phase and 0.77 for the fluid phase. Similar features are observed in the case of the residual internal energy.

  11. Equations of state for the fully flexible WCA chains in the fluid and solid phases based on Wertheim's TPT2.

    PubMed

    Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid

    2018-03-14

    Based on Wertheim's second order thermodynamic perturbation theory (TPT2), equations of state (EOSs) are presented for the fluid and solid phases of tangent, freely jointed spheres. The spheres are considered to interact with each other through the Weeks-Chandler-Andersen (WCA) potential. The developed TPT2 EOS is the sum of a monomeric reference term and a perturbation contribution due to bonding. MC NVT simulations are performed to determine the structural properties of the reference system in the reduced temperature range 0.6 ≤ T* ≤ 4.0 and the packing fraction range 0.1 ≤ η ≤ 0.72. Mathematical functions are fitted to the simulation results of the reference system and employed in the framework of Wertheim's theory to develop TPT2 EOSs for the fluid and solid phases. The extended EOSs are compared to the MC NPT simulation results for the compressibility factor and internal energy of fully flexible chain systems. Simulations are performed for the WCA chain system for chain lengths of up to 15 at T* = 1.0, 1.5, 2.0, and 3.0. Across all the reduced temperatures, the agreement between the results of the TPT2 EOS and the MC simulations is remarkable. The overall average absolute relative percent deviation at T* = 1.0 for the compressibility factor, over the entire range of chain lengths covered, is 0.51 for the solid phase and 0.77 for the fluid phase. Similar features are observed in the case of the residual internal energy.
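
    For reference, the WCA pair potential named above is the Lennard-Jones potential truncated at its minimum, r = 2^(1/6)σ, and shifted up by ε so that it is purely repulsive and vanishes smoothly at the cutoff. A direct transcription:

      import numpy as np

      def wca(r, epsilon=1.0, sigma=1.0):
          """Weeks-Chandler-Andersen potential: LJ cut at its minimum and shifted by epsilon."""
          r = np.asarray(r, dtype=float)
          rc = 2.0 ** (1.0 / 6.0) * sigma          # cutoff at the LJ minimum
          sr6 = (sigma / r) ** 6
          u = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon
          return np.where(r < rc, u, 0.0)

      print(wca([0.95, 1.0, 1.1, 1.5]))   # repulsive below the cutoff, exactly zero beyond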

  12. Photocatalytic Removal of Microcystin-LR by Advanced WO3-Based Nanoparticles under Simulated Solar Light

    PubMed Central

    Zhao, Chao; Li, Dawei; Feng, Chuanping; Zhang, Zhenya; Sugiura, Norio; Yang, Yingnan

    2015-01-01

    A series of advanced WO3-based photocatalysts, including CuO/WO3, Pd/WO3, and Pt/WO3, were synthesized for the photocatalytic removal of microcystin-LR (MC-LR) under simulated solar light. In the present study, Pt/WO3 exhibited the best performance for the photocatalytic degradation of MC-LR. The MC-LR degradation can be described by a pseudo-first-order kinetic model. Chloride ions (Cl−) at appropriate concentrations could enhance the MC-LR degradation. The presence of metal cations (Cu2+ and Fe3+) improved the photocatalytic degradation of MC-LR. This study suggests that Pt/WO3 photocatalytic oxidation under solar light is a promising option for the purification of water containing MC-LR. PMID:25884038

  13. SUPERNOVA DRIVING. I. THE ORIGIN OF MOLECULAR CLOUD TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padoan, Paolo; Pan, Liubin; Haugbølle, Troels

    2016-05-01

    Turbulence is ubiquitous in molecular clouds (MCs), but its origin is still unclear because MCs are usually assumed to live longer than the turbulence dissipation time. Interstellar medium (ISM) turbulence is likely driven by supernova (SN) explosions, but it has never been demonstrated that SN explosions can establish and maintain a turbulent cascade inside MCs consistent with the observations. In this work, we carry out a simulation of SN-driven turbulence in a volume of (250 pc){sup 3}, specifically designed to test if SN driving alone can be responsible for the observed turbulence inside MCs. We find that SN driving establishes a velocity scaling consistent with the usual scaling laws of supersonic turbulence, suggesting that previous idealized simulations of MC turbulence, driven with a random, large-scale volume force, were correctly adopted as appropriate models for MC turbulence, despite the artificial driving. We also find that the same scaling laws extend to the interiors of MCs, and that the velocity–size relation of the MCs selected from our simulation is consistent with that of MCs from the Outer Galaxy Survey, the largest MC sample available. The mass–size relation and the mass and size probability distributions also compare successfully with those of the Outer Galaxy Survey. Finally, we show that MC turbulence is super-Alfvénic with respect to both the mean and rms magnetic-field strength. We conclude that MC structure and dynamics are the natural result of SN-driven turbulence.

  14. Poster — Thur Eve — 46: Monte Carlo model of the Novalis Classic 6MV stereotactic linear accelerator using the GATE simulation platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebe, J; Department of Physics and Astronomy, University of Calgary, Calgary, AB; Ploquin, N

    2014-08-15

    Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time on GateLab is approximately 8 hours for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm{sup 2} to 98×98 mm{sup 2} (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.

  15. Risk Assessment and Prediction of Flyrock Distance by Combined Multiple Regression Analysis and Monte Carlo Simulation of Quarry Blasting

    NASA Astrophysics Data System (ADS)

    Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh

    2016-09-01

    Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all the undesirable environmental impacts of blasting. Therefore, proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, the flyrock distance can be controlled and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses; the flyrock phenomenon was then simulated by the Monte Carlo (MC) approach using the developed MR model. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and several controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of the MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of the MC-simulated flyrock was 236.3 m, while the measured value was 238.6 m. Furthermore, a sensitivity analysis was conducted to investigate the effects of the model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. It should be noted that the proposed MR and MC models should be utilized only in the studied area, and their direct use under other conditions is not recommended.
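
    The two-stage MR-plus-MC idea can be sketched as follows: sample the blasting inputs from assumed distributions and push each sample through a fitted regression equation to obtain a flyrock-distance distribution. The coefficients and input distributions below are hypothetical placeholders, not the Ulu Tiram fit.

      import numpy as np

      rng = np.random.default_rng(5)
      trials = 10_000

      powder_factor = rng.normal(0.6, 0.1, trials)    # kg/m^3, assumed distribution
      burden = rng.normal(2.5, 0.3, trials)           # m, assumed distribution
      stemming = rng.normal(2.0, 0.2, trials)         # m, assumed distribution

      # hypothetical linear MR model: distance = b0 + b1*PF + b2*B + b3*ST
      flyrock = 120.0 + 220.0 * powder_factor - 8.0 * burden - 5.0 * stemming

      print(f"simulated mean flyrock: {flyrock.mean():.1f} m, "
            f"95th percentile: {np.percentile(flyrock, 95):.1f} m")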

  16. SU-E-T-535: Proton Dose Calculations in Homogeneous Media.

    PubMed

    Chapman, J; Fontenot, J; Newhauser, W; Hogstrom, K

    2012-06-01

    To develop a pencil beam dose calculation algorithm for scanned proton beams that improves modeling of scatter events. Our pencil beam algorithm (PBA) was developed for calculating dose from monoenergetic, parallel proton beams in homogeneous media. Fermi-Eyges theory was implemented for pencil beam transport. Elastic and nonelastic scatter effects were each modeled as a Gaussian distribution, with root mean square (RMS) widths determined from theoretical calculations and a nonlinear fit to a Monte Carlo (MC) simulated 1 mm × 1 mm proton beam, respectively. The PBA was commissioned using MC simulations in a flat water phantom. Resulting PBA calculations were compared with results of other models reported in the literature on the basis of differences between PBA and MC calculations of 80-20% penumbral widths. Our model was further tested by comparing PBA and MC results for oblique beams (45° incidence) and surface irregularities (step heights of 1 and 4 cm) for energies of 50-250 MeV and field sizes of 4 cm × 4 cm and 10 cm × 10 cm. Agreement between PBA and MC distributions was quantified by computing the percentage of points within 2% dose difference or 1 mm distance to agreement. Our PBA improved agreement between calculated and simulated penumbral widths by an order of magnitude compared with previously reported values. For comparisons of oblique beams and surface irregularities, agreement between PBA and MC distributions was better than 99%. Our algorithm showed improved accuracy over other models reported in the literature in predicting the overall shape of the lateral profile through the Bragg peak. This improvement was achieved by incorporating nonelastic scatter events into our PBA. The increased modeling accuracy of our PBA, incorporated into a treatment planning system, may improve the reliability of treatment planning calculations for patient treatments. This research was supported by contract W81XWH-10-1-0005 awarded by The U.S. Army Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD 21702-5014. This report does not necessarily reflect the position or policy of the Government, and no official endorsement should be inferred. © 2012 American Association of Physicists in Medicine.

  17. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S{sub MC}. S{sub MC} is fit to a function, S{sub F}, and if the fit of S{sub F} is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S{sub F}, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric, showing strong correlation with the actual error of the scatter fit, S{sub F}. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass-filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35-93 s and 114-122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%-50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
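
    The fitting idea, reduced to one dimension, is shown below: a noisy MC scatter estimate is smoothed by keeping only the lowest Fourier frequencies (a frequency-limited sum of sines and cosines), with Pearson's r as the GOF check. The synthetic signal is a stand-in for S{sub MC}; the cutoff and noise level are arbitrary.

      import numpy as np

      rng = np.random.default_rng(6)
      x = np.linspace(0, 1, 256)
      s_true = 1.0 + 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)
      s_mc = s_true + 0.1 * rng.standard_normal(x.size)   # noisy MC scatter estimate

      spec = np.fft.rfft(s_mc)
      spec[8:] = 0.0                        # keep only the lowest spatial frequencies
      s_fit = np.fft.irfft(spec, n=x.size)  # frequency-limited sine/cosine fit

      r = np.corrcoef(s_fit, s_mc)[0, 1]    # Pearson's r as goodness-of-fit metric
      print(f"Pearson r = {r:.3f}")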

  18. Development of Simulation Methods in the Gibbs Ensemble to Predict Polymer-Solvent Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Gartner, Thomas; Epps, Thomas; Jayaraman, Arthi

    Solvent vapor annealing (SVA) of polymer thin films is a promising method for post-deposition control of polymer film morphology. The large number of parameters relevant to SVA (polymer, solvent, and substrate chemistries; incoming film condition; annealing and solvent evaporation conditions) makes systematic experimental study of SVA time-consuming, motivating the application of simulation and theory to the SVA system to provide both mechanistic insight and scans of this wide parameter space. However, to rigorously treat the phase equilibrium between polymer film and solvent vapor while still probing the dynamics of SVA, new simulation methods must be developed. In this presentation, we compare two methods for studying polymer-solvent phase equilibrium: Gibbs Ensemble Molecular Dynamics (GEMD) and Hybrid Monte Carlo/Molecular Dynamics (Hybrid MC/MD). Liquid-vapor equilibrium results are presented for the Lennard-Jones fluid and for coarse-grained polymer-solvent systems relevant to SVA. We found that the Hybrid MC/MD method is more stable and consistent than GEMD, but GEMD has significant advantages in computational efficiency. We propose that Hybrid MC/MD simulations be used for unfamiliar systems at a few chosen conditions, followed by much faster GEMD simulations to map out the remainder of the phase window.

  19. A preliminary study of in-house Monte Carlo simulations: an integrated Monte Carlo verification system.

    PubMed

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    The purpose of this work was to develop an infrastructure for the integrated Monte Carlo verification system (MCVS), to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example in lung and bone. The MCVS consists of a graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with the MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS uses the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit were developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display radiotherapy treatment plans created by the MC method and various treatment planning systems, in formats such as RTOG and DICOM-RT. Dose distributions could be analyzed using dose profiles and dose-volume histograms and compared on the same platform. With the cluster system, calculation time improved in line with the increase in the number of central processing units (CPUs), at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  20. Modelling the structural response of cotton plants to mepiquat chloride and population density

    PubMed Central

    Gu, Shenghao; Evers, Jochem B.; Zhang, Lizhen; Mao, Lili; Zhang, Siping; Zhao, Xinhua; Liu, Shaodong; van der Werf, Wopke; Li, Zhaohu

    2014-01-01

    Background and Aims Cotton (Gossypium hirsutum) has indeterminate growth. The growth regulator mepiquat chloride (MC) is used worldwide to restrict vegetative growth and promote boll formation and yield. The effects of MC are modulated by complex interactions with growing conditions (nutrients, weather) and plant population density, and as a result the effects on plant form are not fully understood and are difficult to predict. The use of MC is thus hard to optimize. Methods To explore crop responses to plant density and MC, a functional–structural plant model (FSPM) for cotton (named CottonXL) was designed. The model was calibrated using 1 year's field data, and validated by using two additional years of detailed experimental data on the effects of MC and plant density in stands of pure cotton and in intercrops of cotton with wheat. CottonXL simulates development of leaf and fruits (square, flower and boll), plant height and branching. Crop development is driven by thermal time, population density, MC application, and topping of the main stem and branches. Key Results Validation of the model showed good correspondence between simulated and observed values for leaf area index with an overall root-mean-square error of 0.50 m2 m−2, and with an overall prediction error of less than 10% for number of bolls, plant height, number of fruit branches and number of phytomers. Canopy structure became more compact with the decrease of leaf area index and internode length due to the application of MC. Moreover, MC did not have a substantial effect on boll density but increased lint yield at higher densities. Conclusions The model satisfactorily represents the effects of agronomic measures on cotton plant structure. It can be used to identify optimal agronomic management of cotton to achieve optimal plant structure for maximum yield under varying environmental conditions. PMID:24489020

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nusrat, H; Pang, G; Sarfehnia, A

    Purpose: This work seeks to develop a beam quality meter using multiple differently doped plastic scintillators that are thus intrinsically beam-quality dependent. Plastic scintillators spontaneously emit visible light upon irradiation; the amount of light produced depends on stopping power (closely related to LET) according to Birks' law. Doping plastic scintillators can be used to tune their sensitivity to specific LET ranges. Methods: GEANT4.10.1 Monte Carlo (MC) was used to evaluate the response of various scintillator dopant combinations. MC radiation transport and scintillator light response were validated against previously published literature. Current work involves evaluating detector response experimentally; to that end, a detector prototype with interchangeable scintillator housing was constructed. The measurement set-up guides light emitted by the scintillator to a photomultiplier tube via a glass taper junction coupled to an optical fiber. The resulting signal is measured by an electrometer and normalized to the dose readout from a diode. Measurements have been done using clinical electron and orthovoltage beams. MC response (simulated scintillator light normalized to dose scored inside the scintillating volume) was evaluated for four different LET radiations for an undoped and a 1% Pb-doped scintillator (σ=0.85%). Simulated incident electron energies included 0.05, 0.1, 0.2, 6, 12, and 18 MeV; these energies correspond to stopping power (related to LET) values ranging from 1.824 to 11.09 MeV cm² g⁻¹ (SCOL from NIST-ESTAR). Results: Initial MC results show a distinct divergence in scintillator response as LET increases. The response of the undoped plastic scintillator indicated a 35.0% increase in signal when going from 18 MeV (low LET) to 0.05 MeV (high LET), while the 1% Pb-doped scintillator indicated a 100.9% increase. Conclusion: After validating MC against measurement, simulations will be used to test various concentrations (2%, 4%, 6%) of different high-Z material dopants (W, Mo) to optimize the scintillator types for the beam quality meter. NSERC Discovery Grant RGPIN-435608.
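
    For context, Birks' law as invoked above relates light yield per unit path length to stopping power. A minimal sketch; the S and kB values are illustrative placeholders, not the prototype's measured parameters.

    ```python
    # Minimal sketch of Birks' law: dL/dx = S*(dE/dx) / (1 + kB*(dE/dx)).
    # S and kB are illustrative; kB's units here are the inverse of dE/dx's
    # (g / (MeV cm^2), since mass stopping power is used below).
    def birks_light_yield(dedx_mev_cm2_g: float, S: float = 1.0, kB: float = 0.05) -> float:
        return S * dedx_mev_cm2_g / (1.0 + kB * dedx_mev_cm2_g)

    # Compare the low- and high-LET ends quoted above (1.824 vs 11.09 MeV cm^2/g):
    for dedx in (1.824, 11.09):
        print(f"dE/dx = {dedx:6.3f} -> relative light yield {birks_light_yield(dedx):.3f}")
    ```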

  2. Deformation of the Durom Acetabular Component and Its Impact on Tribology in a Cadaveric Model—A Simulator Study

    PubMed Central

    Gu, Yanqing; Wang, Qing; Cui, Weiding; Fan, Weimin

    2012-01-01

    Background Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the Durom large-head metal-on-metal (MOM) total hips in simulators. Methods Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. Results The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78±8.86 µm. The average volumetric wear rates in the experimental group and the control group during the first million cycles were 6.65±0.29 mm³/MC and 0.89±0.04 mm³/MC, respectively (t = 48.43, p < 0.001). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. Conclusions These findings imply that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation results in increased volumetric wear and increased ion release. Clinical Relevance Deformation of the cup after implantation increases the wear of MOM bearings and the resulting ion levels. The clinical use of the Durom large-head prosthesis should therefore be undertaken with great care. PMID:23144694

  3. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
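
    To illustrate the class-II condensed-history idea described above (losses above a threshold sampled discretely, sub-threshold losses treated continuously), a toy sketch follows. The stopping-power and hard-event models below are schematic placeholders, not MCsquare's actual physics.

    ```python
    import random

    # Toy class-II condensed-history step: continuous soft losses plus
    # occasional discrete "hard" ionizations above a threshold.
    THRESHOLD_MEV = 0.1

    def restricted_dedx(e_mev: float) -> float:
        """Toy restricted stopping power in MeV/cm (placeholder ~1/E shape)."""
        return 5.0 / max(e_mev, 1.0)

    def take_step(e_mev: float, step_cm: float) -> float:
        e_mev -= restricted_dedx(e_mev) * step_cm         # grouped soft losses
        if random.random() < min(0.05 * step_cm, 1.0):    # toy hard-event rate
            e_mev -= THRESHOLD_MEV + random.expovariate(1.0 / THRESHOLD_MEV)
        return max(e_mev, 0.0)

    e = 200.0  # MeV proton
    while e > 1.0:
        e = take_step(e, 0.1)
    ```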

  4. Three dimensional electrochemical simulation of solid oxide fuel cell cathode based on microstructure reconstructed by marching cubes method

    NASA Astrophysics Data System (ADS)

    He, An; Gong, Jiaming; Shikazono, Naoki

    2018-05-01

    In the present study, a model is introduced to correlate the electrochemical performance of a solid oxide fuel cell (SOFC) with the 3D microstructure reconstructed by focused ion beam scanning electron microscopy (FIB-SEM), in which the solid surface is modeled by the marching cubes (MC) method. The lattice Boltzmann method (LBM) is used to solve the governing equations. In order to preserve the geometries reconstructed by the MC method, local effective diffusivities and conductivities computed from the MC geometries are applied in each grid cell, and a partial bounce-back scheme is applied according to the boundary predicted by the MC method. From the tortuosity factor and overpotential calculation results, it is concluded that the MC geometry drastically improves computational accuracy by providing more precise topological information.
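
    For readers unfamiliar with the reconstruction step, a marching cubes surface extraction can be sketched with scikit-image on a toy segmented volume. This is illustrative only; the paper's own implementation may differ.

    ```python
    import numpy as np
    from skimage import measure

    # Extract a triangle mesh of the solid/pore interface from a toy phase map,
    # standing in for a segmented FIB-SEM volume.
    volume = np.zeros((64, 64, 64))
    x, y, z = np.indices(volume.shape)
    volume[(x - 32)**2 + (y - 32)**2 + (z - 32)**2 < 20**2] = 1.0

    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)  # mesh approximating the solid surface
    ```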

  5. MO-E-17A-03: Monte Carlo CT Dose Calculation: A Comparison Between Experiment and Simulation Using ARCHER-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Du, X; Su, L

    2014-06-15

    Purpose: To compare CT doses derived from experiments and GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned with a GE LightSpeed Pro 16 MDCT. For the cadaver study, thimble chambers (Models 10×5−0.6CT and 10×6−0.6CT) were used to measure the absorbed dose in various deep and superficial organs. Whole-body scans were first performed to construct a complete image database for MC simulations. Abdomen/pelvis helical scans were then conducted at 120/100 kVp, 300 mAs, and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed at 120 kVp with x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the ARCHER-CT results were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas, and urinary bladder were compared. The difference between experiments and simulations was within 19% for the 120 kVp scan and 25% for the 100 kVp scan. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys, and thymus were compared. The difference was 39.2% for the esophagus and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to systematic errors in the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm used to generate the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. R01 grant (R01EB015478) from the National Institute of Biomedical Imaging and Bioengineering.

  6. Development of a polarized neutron beam line at Algerian research reactors using McStas software

    NASA Astrophysics Data System (ADS)

    Makhloufi, M.; Salah, H.

    2017-02-01

    Instruments for unpolarized neutrons have long been studied and designed using the McStas simulation tool, but models for simulating polarized neutron scattering instruments have been added to McStas only recently. In the present contribution, we used McStas to design a polarized neutron beam line, taking advantage of the reflectometer and diffractometer spectrometers available in Algeria. Both thermal and cold neutrons were considered. Polarization was achieved with two types of supermirror polarizers, FeSi and CoCu, provided by the HZB institute. For performance assessment and comparison, the polarizers were characterized and their characteristics reproduced in the simulations. The simulated instruments are reported. A flipper and electromagnets for the guide field were developed. Further developments, including analyzers and upgrades of the existing spectrometers, are underway.

  7. Comparison of Fluka-2006 Monte Carlo Simulation and Flight Data for the ATIC Detector

    NASA Technical Reports Server (NTRS)

    Gunasingha, R.M.; Fazely, A.R.; Adams, J.H.; Ahn, H.S.; Bashindzhagyan, G.L.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T.G.; Isbert, J.; hide

    2007-01-01

    We have performed a detailed Monte Carlo (MC) simulation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2006, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon-flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active bismuth germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification and, for particle tracking, three projective layers of x-y scintillator hodoscopes located above, in the middle of, and below a 0.75 nuclear interaction length graphite target. Our simulations are part of an analysis package of both nuclear (A) and energy dependences for different nuclei interacting in the ATIC detector. The MC simulates the response of different components of the detector, such as the Si matrix, the scintillator hodoscopes, and the BGO calorimeter, to various nuclei. We present comparisons of the FLUKA-2006 MC calculations with GEANT calculations and with the ATIC CERN data and ATIC flight data.

  8. Radio Frequency Scanning and Simulation of Oriented Strand Board Material Property

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojian; Zhang, Jilei; Steele, Philip. H.; Donohoe, J. Patrick

    2008-02-01

    Oriented strand board (OSB) is a wood composite product with the largest market share in U.S. residential and commercial construction. Wood specific gravity (SG) and moisture content (MC) play an important role in the OSB manufacturing process. They are two of the critical variables that manufacturers are required to monitor, locate, and control in order to produce a product of consistent quality. In this study, radio frequency scanning nondestructive evaluation (NDE) technologies were used to evaluate the local-area MC and SG of OSB panels following panel production by hot pressing. A finite element simulation tool was used to optimize the sensor geometry and to investigate the interaction between the electromagnetic field and the wood dielectric properties. Our results indicate the RF scanning response is closely correlated with the MC and SG variations in OSB panels. Radio frequency NDE appears to have potential as an effective method for ensuring OSB panel quality during manufacturing.

  9. New developments in the McStas neutron instrument simulation package

    NASA Astrophysics Data System (ADS)

    Willendrup, P. K.; Knudsen, E. B.; Klinkby, E.; Nielsen, T.; Farhi, E.; Filges, U.; Lefmann, K.

    2014-07-01

    The McStas neutron ray-tracing software package is a versatile tool for building accurate simulators of neutron scattering instruments at reactors, short- and long-pulsed spallation sources such as the European Spallation Source. McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.

  10. Monte Carlo study of LDR seed dosimetry with an application in a clinical brachytherapy breast implant.

    PubMed

    Furstoss, C; Reniers, B; Bertrand, M J; Poon, E; Carrier, J-F; Keller, B M; Pignol, J P; Beaulieu, L; Verhaegen, F

    2009-05-01

    A Monte Carlo (MC) study was carried out to evaluate the effects of interseed attenuation and tissue composition for two models of 125I low dose rate (LDR) brachytherapy seeds (Medi-Physics 6711, IBt InterSource) in a permanent breast implant. The effect of tissue composition was investigated because the breast localization presents heterogeneities such as glandular and adipose tissue surrounded by air, lungs, and ribs. The absolute MC dose calculations were benchmarked by comparison with absolute doses obtained from experimental results. Before modeling a clinical case of an implant in a heterogeneous breast, the effects of tissue composition and interseed attenuation were studied in homogeneous phantoms. To investigate the tissue composition effect, the doses along the transverse axes of the two seed models were calculated and compared in different materials. For each seed model, three seeds sharing the same transverse axis were simulated to evaluate the interseed effect in water as a function of the distance from the seed. A clinical study of a permanent breast 125I implant for a single patient was carried out using four dose calculation techniques: (1) a TG-43 based calculation, (2) a full MC simulation with realistic tissues and seed models, (3) an MC simulation in water with modeled seeds, and (4) an MC simulation without modeling the seed geometry but with realistic tissues. In the latter, a phase-space file corresponding to the particles emitted from the external surface of the seed is used at each seed location. The results were compared by calculating the clinical metrics V85, V100, and V200 relevant for this kind of treatment in the target. D90 and D50 were also determined to evaluate the differences in dose and to compare the results with studies published for permanent prostate seed implants in the literature. The experimental results are in agreement with the MC absolute doses (within 5% for EBT Gafchromic film and within 7% for TLD-100). Important differences between the dose along the transverse axis of the seed in water and in adipose tissue are obtained (10% at 3.5 cm). The comparisons between the full MC and the TG-43 calculations show no significant differences for V85 and V100. For V200, an 8.4% difference is found, coming mainly from the tissue composition effect. Larger differences (about 10.5% for the model 6711 seed and about 13% for the InterSource125) are determined for D90 and D50. These differences depend on the composition of the breast tissue modeled in the simulation. A variation in the percentage by mass of mammary gland and adipose tissue can cause important differences in the clinical dose metrics V200, D90, and D50. Although the differences in V85, V100, and V200 are clinically acceptable in comparison with the large variation in dose across the treated volume, this work demonstrates that the development of an MC treatment planning system for LDR brachytherapy would improve dose determination in the treated region and consequently the dose-outcome relationship, especially for skin toxicity.
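
    For reference, technique (1) above evaluates the TG-43 line-source equation, D(r,θ) = S_K · Λ · [G_L(r,θ)/G_L(r₀,θ₀)] · g_L(r) · F(r,θ). A minimal sketch follows, with the seed-specific tabulated functions g_L and F stubbed to unity, so all numbers are illustrative only.

    ```python
    import math

    # Sketch of the TG-43 line-source formalism (g_L and F stubbed to 1.0).
    def geometry_factor_line(r_cm: float, theta_rad: float, L_cm: float = 0.3) -> float:
        """G_L = beta / (L * r * sin(theta)), beta being the angle the source
        line subtends at the calculation point."""
        if abs(math.sin(theta_rad)) < 1e-9:                     # on the long axis
            return 1.0 / (r_cm**2 - L_cm**2 / 4.0)
        beta = (math.atan2(r_cm*math.cos(theta_rad) + L_cm/2, r_cm*math.sin(theta_rad))
                - math.atan2(r_cm*math.cos(theta_rad) - L_cm/2, r_cm*math.sin(theta_rad)))
        return abs(beta) / (L_cm * r_cm * math.sin(theta_rad))

    def tg43_dose_rate(S_K_U: float, Lambda_cGy_hU: float, r_cm: float, theta_rad: float) -> float:
        g_L, F = 1.0, 1.0                                       # per-seed tables omitted
        G0 = geometry_factor_line(1.0, math.pi / 2)             # reference point (1 cm, 90 deg)
        return S_K_U * Lambda_cGy_hU * geometry_factor_line(r_cm, theta_rad) / G0 * g_L * F

    print(tg43_dose_rate(0.5, 0.965, r_cm=2.0, theta_rad=math.pi/2))  # cGy/h, illustrative
    ```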

  11. Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharjee, P. S.; Sud, Yogesh C.; Liu, Xiaohong

    2010-02-22

    A common deficiency of many cloud-physics parameterizations, including NASA's microphysics of clouds with aerosol-cloud interactions (hereafter called McRAS-AC), is that they simulate smaller (larger) than observed ice cloud particle numbers (sizes). A single column model (SCM) with McRAS-AC and General Circulation Model (GCM) physics, together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols, was used to systematically examine the influence of ammonium sulfate ((NH4)2SO4) aerosols, not included in the present formulations of McRAS-AC. Specifically, the influence of (NH4)2SO4 aerosols on the optical properties of both liquid and ice clouds was analyzed. First, an (NH4)2SO4 parameterization was included in the APM to assess its effect vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted with the SCM over the ARM-SGP site and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that inclusion of (NH4)2SO4 in the SCM made a remarkable improvement in the simulated effective radius of ice clouds. However, the corresponding ice-cloud optical thickness increased more than is observed. This can be caused by a lack of cloud advection and evaporation. We argue that this deficiency can be mitigated by adjusting the other tunable parameters of McRAS-AC, such as precipitation efficiency. Inclusion of ice cloud particle splintering, introduced through well-established empirical equations, is found to further improve the results. Preliminary tests show that these changes make a substantial improvement in simulating the cloud optical properties in the GCM, particularly by simulating a far more realistic cloud distribution over the ITCZ.

  12. Simulation of temperature distribution in tumor Photothermal treatment

    NASA Astrophysics Data System (ADS)

    Zhang, Xiyang; Qiu, Shaoping; Wu, Shulian; Li, Zhifang; Li, Hui

    2018-02-01

    Light transmission in biological tissue and the optical properties of biological tissue are important research topics in biomedical photonics, of great theoretical and practical significance for medical diagnosis and light-based therapy of disease. In this paper, a temperature feedback controller is presented for monitoring photothermal treatment in real time. Two-dimensional Monte Carlo (MC) simulation and the diffusion approximation were compared and analyzed. The results demonstrated that the diffusion approximation, solved by the finite element method with extrapolated boundary conditions, is a good approximation to MC simulation. Then, in order to minimize thermal damage, real-time temperature regulation during photothermal treatment was implemented with a proportional-integral-derivative (PID) controller.
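
    A discrete PID temperature loop of the kind described can be sketched as follows; the gains and the first-order tissue response are illustrative stand-ins, not the paper's model.

    ```python
    # Toy PID loop holding tissue temperature at a setpoint during heating.
    def simulate_pid(setpoint_c=45.0, dt_s=0.1, steps=600,
                     kp=2.0, ki=0.5, kd=0.1):
        temp, integral, prev_err = 37.0, 0.0, 0.0
        for _ in range(steps):
            err = setpoint_c - temp
            integral += err * dt_s
            derivative = (err - prev_err) / dt_s
            power = max(0.0, kp * err + ki * integral + kd * derivative)
            # toy first-order thermal response: heating minus conduction losses
            temp += (0.5 * power - 0.2 * (temp - 37.0)) * dt_s
            prev_err = err
        return temp

    print(f"final temperature: {simulate_pid():.2f} C")  # settles near 45 C
    ```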

  13. Designing new guides and instruments using McStas

    NASA Astrophysics Data System (ADS)

    Farhi, E.; Hansen, T.; Wildes, A.; Ghosh, R.; Lefmann, K.

    With the increasing complexity of modern neutron-scattering instruments, powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) have become essential. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle such effects, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics and instruments [1]. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering and time-of-flight spectrometers [2]. In this paper, we present simulation results for different guide geometries that may be used in the future at the Institut Laue-Langevin. Gain factors ranging from two to five may be obtained for the integrated intensities, depending on the exact geometry, the guide coatings and the source.

  14. SU-F-T-33: Air-Kerma Strength and Dose Rate Constant by the Full Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsuji, S; Oita, M; Narihiro, N

    2016-06-15

    Purpose: In general, the air-kerma strength (Sk) has been determined by weighting the photon energy fluence by the corresponding mass-energy absorption coefficient or mass-energy transfer coefficient. Kerma is an acronym for kinetic energy released per unit mass, defined as the sum of the initial kinetic energies of all the charged particles liberated by the photons. Monte Carlo (MC) simulations can instead follow the kinetic energies of the charged particles after photon interactions and sum that energy directly. In this work, the Sk of an ¹⁹²Ir source is obtained from such a full MC simulation, and the dose rate constant Λ is then determined. Methods: MC simulations were performed using EGS5 with the microSelectron HDR v2 type of ¹⁹²Ir source. The air-kerma rate was obtained by summing the kinetic energies of electrons set in motion by photoelectric absorption or Compton scattering, at transverse-axis distances from 1 to 120 cm in a 10 m diameter air phantom. Absorbed dose in water was simulated in a 30 cm diameter water phantom. The transport cut-off energy was 10 keV; 2.4 × 10¹¹ primary photons from the source were required for the air-kerma rate and 3 × 10¹⁰ for the absorbed dose in water. Results: Sk was determined by multiplying the air-kerma rate by the square of the distance and fitting a linear function. The result is Sk = (2.7039 ± 0.0085) × 10⁻¹¹ µGy m² Bq⁻¹ s⁻¹. The absorbed dose rate in water at 1 cm transverse-axis distance, D(r₀, θ₀), is (3.0114 ± 0.0015) × 10⁻¹¹ cGy Bq⁻¹ s⁻¹. Conclusion: From these results, the dose rate constant Λ of the microSelectron HDR v2 ¹⁹²Ir source is (1.1137 ± 0.0035) cGy h⁻¹ U⁻¹ by full MC simulation. The consensus value conΛ is (1.109 ± 0.012) cGy h⁻¹ U⁻¹; the present result is consistent with the consensus data.
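
    The reported Λ can be checked directly from the two quoted MC values, since Λ = D(r₀,θ₀)/Sk and the per-second to per-hour conversion cancels in the ratio (1 U = 1 µGy m² h⁻¹):

    ```python
    # Quick arithmetic check of the dose rate constant quoted above.
    # Lambda = D(r0, theta0) / S_K; converting both rates to per-hour
    # multiplies numerator and denominator by 3600, so it cancels.
    S_K  = 2.7039e-11   # uGy m^2 Bq^-1 s^-1
    D_r0 = 3.0114e-11   # cGy Bq^-1 s^-1
    print(f"Lambda = {D_r0 / S_K:.4f} cGy h^-1 U^-1")  # -> 1.1137, as reported
    ```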

  15. Validation of GPU-accelerated superposition-convolution dose computations for the Small Animal Radiation Research Platform.

    PubMed

    Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John

    2018-05-01

    The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated superposition-convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for the SARRP. This paper describes the validation of the SC method for kilovoltage energies by comparison with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated with the EGSnrc code using 3 × 10⁸ to 1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various phantoms, including plastic water, cork, graphite, and aluminum, were used to encompass the range of densities of mouse organs. For the comparison, percentage depth doses (PDD) from SC, MC, and film measurements were analyzed. Cross-beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC dose to MC dose-to-medium were derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and differences in dose calculation grids. The GPU-accelerated superposition-convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium.
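
    The core of any superposition-convolution method is the convolution of TERMA (total energy released per unit mass) with an energy-deposition kernel. A schematic sketch follows; both arrays are toy stand-ins, not the SARRP beam data.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Toy attenuating pencil beam as TERMA in a 64^3 grid.
    terma = np.zeros((64, 64, 64))
    terma[28:36, 28:36, :] = np.exp(-0.05 * np.arange(64))

    # Toy isotropic point-spread kernel, normalized to conserve energy.
    k = np.arange(-5, 6)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kernel = np.exp(-np.sqrt(kx**2 + ky**2 + kz**2))
    kernel /= kernel.sum()

    dose = fftconvolve(terma, kernel, mode="same")  # a GPU version does this per energy bin
    ```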

  16. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example for studying multi-component lipid membranes containing cholesterol. PMID:24469314
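
    All of the move sets discussed above ultimately feed a Metropolis acceptance test. A bare-bones sketch; the energy change would come from whatever force field is in use.

    ```python
    import math, random

    # Metropolis criterion: accept a trial move with probability min(1, exp(-dE/kT)).
    def metropolis_accept(delta_e_kT: float) -> bool:
        return delta_e_kT <= 0.0 or random.random() < math.exp(-delta_e_kT)

    # Usage: propose a conformational move, compute its energy change, then e.g.
    accepted = metropolis_accept(delta_e_kT=1.3)
    ```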

  17. Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS

    NASA Astrophysics Data System (ADS)

    Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.

    2014-07-01

    Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons is simulated using a full-scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide and, using newly developed event-logging capability, records the neutron state parameters of un-reflected neutrons at each scattering. This information is handed back to MCNPX, where it serves as the neutron source input for a second MCNPX simulation, which enables calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism records the scattering events along the guides, which is exploited to determine the supermirror quality (i.e. m-values) required at different positions along the beam guide to transport neutrons in the same guide/source setup.

  18. Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin

    USGS Publications Warehouse

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    The Edisto River is the longest and largest river system completely contained in South Carolina and is one of the longest free-flowing blackwater rivers in the United States. The Edisto River basin also has fish-tissue mercury concentrations that are among the highest recorded in the United States. As part of an effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River basin, analyses and simulations of the hydrology of the Edisto River basin were made with the topography-based hydrological model (TOPMODEL). The potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River basin, was assessed. Scaling up was done in a step-wise process, beginning with applying the calibration parameters, meteorological data, and topographic wetness index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made in subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River basin and updated values for some of the TOPMODEL calibration parameters. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the two models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the significant difference in drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations previously collected in the McTier Creek basin were used in the water-quality load models.
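
    For reference, the topographic wetness index on which both the McTier Creek and Edisto River TOPMODEL applications rely is TWI = ln(a / tan β), where a is the upslope contributing area per unit contour length and β the local slope. A minimal sketch with illustrative values:

    ```python
    import numpy as np

    # TWI = ln(a / tan(beta)); slopes are clipped away from zero to avoid
    # division blow-ups on flat cells.
    def topographic_wetness_index(upslope_area_m2_per_m: np.ndarray,
                                  slope_rad: np.ndarray) -> np.ndarray:
        return np.log(upslope_area_m2_per_m / np.tan(np.clip(slope_rad, 1e-3, None)))

    twi = topographic_wetness_index(np.array([50.0, 500.0, 5000.0]),
                                    np.radians(np.array([10.0, 5.0, 1.0])))
    print(twi)  # wetter (higher TWI) where large areas drain through gentle slopes
    ```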

  19. SU-G-201-13: Investigation of Dose Variation Induced by HDR Ir-192 Source Global Shift Within the Varian Ring Applicator Using Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y; Cai, J; Meltsner, S

    2016-06-15

    Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict due to the larger interior ring lumen. Some studies have shown the source can be several millimeters from planned positions, while other studies have demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift, using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step sizes) was simulated as the summation of 10 individual sources with equal dwell times. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, and ±0.5 cm were then simulated as distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. Across seventy simulations, 10⁸ photon histories gave statistical uncertainties (k=1) of <2% for (0.1 cm)³ voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for the distal global shifts, and 0.4%, 2.8%, and 5.1% higher for the proximal global shifts, respectively. The MC Point A doses differed by <1% from BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.

  20. JSC-1: Lunar Simulant of Choice for Geotechnical Applications and Oxygen Production

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence A.; Hill, Eddy; Liu, Yang; Day, James M. D.

    2005-01-01

    Lunar simulant JSC-1 was produced as the result of a workshop held in 1991 to evaluate the status of simulated lunar material and to make recommendations on future requirements and production of such material (McKay et al., 1991). JSC-1 was prepared from a welded tuff that was mined, crushed, and sized from the Pleistocene San Francisco volcanic field, northern Arizona. As the initial production of approximately 12,300 kg is nearly depleted, new production has commenced. The mineralogy and chemical properties of JSC-1 are described in McKay et al. (1994) and Hill et al. (this volume); a description of its geotechnical properties appears in Klosky et al. (1996). Although other lunar-soil simulants have been produced (e.g., MLS-1: Weiblen et al., 1990; Desai et al., 1992; Chua et al., 1994), they have not been as well standardized as JSC-1, which makes it difficult to standardize results from tests performed on these simulants. Here, we provide an overview of the composition, mineralogy, strength and deformation properties, and potential uses of JSC-1, and outline why it is presently the 'lunar simulant of choice' for geotechnical applications and as a proxy for lunar-oxygen production.

  1. Surface tension of undercooled liquid cobalt

    NASA Astrophysics Data System (ADS)

    Yao, W. J.; Han, X. J.; Chen, M.; Wei, B.; Guo, Z. Y.

    2002-08-01

    This paper presents results on experimentally measured and numerically predicted surface tensions of undercooled liquid cobalt. The experiments were performed using the oscillating drop technique combined with electromagnetic levitation. The simulations were carried out with the Monte Carlo (MC) method, where the surface tension is predicted through calculations of the work of cohesion, and the interatomic interaction is described with an embedded-atom method. The maximum undercooling of the liquid cobalt reached 231 K (0.13 Tm) in the experiment and 268 K (0.17 Tm) in the simulation. The surface tension and its temperature dependence obtained in the experiment and the simulation are σexp = 1.93 − 0.00033(T − Tm) N m⁻¹ and σcal = 2.26 − 0.00032(T − Tm) N m⁻¹, respectively. The temperature dependence of the surface tension calculated from the MC simulation is in reasonable agreement with that measured in the experiment.

  2. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    NASA Astrophysics Data System (ADS)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular-flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally very expensive. Improvement of computational efficiency is therefore essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  3. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. Many attributes of the downlink are desired, however, and a faster orbit determination method is preferable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium- to high-altitude satellites. One practical result of this comparison is the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun-synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun-synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three-dimensionally.
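
    The autocorrelation approach mentioned above can be sketched on a synthetic daily visibility record; this is illustrative only, not the study's data.

    ```python
    import numpy as np

    # Synthetic daily record with a 16 day cycle plus noise.
    days = np.arange(512)
    rng = np.random.default_rng(0)
    visibility = np.sin(2 * np.pi * days / 16) + 0.3 * rng.standard_normal(512)

    # Normalized autocorrelation; the first peak away from lag 0 marks the period.
    x = visibility - visibility.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    print(np.argmax(acf[8:32]) + 8)  # lag of the correlation spike, ~16 days
    ```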

  4. Assessment of the dose distribution inside a cardiac cath lab using TLD measurements and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Baptista, M.; Teles, P.; Cardoso, G.; Vaz, P.

    2014-11-01

    Over the last decade, there has been a substantial increase in the number of interventional cardiology procedures worldwide, and the corresponding ionizing radiation doses for both the medical staff and patients have become a subject of concern. Interventional procedures in cardiology are normally very complex, resulting in long exposure times. These interventions also require the operator to work near the patient and, consequently, close to the primary x-ray beam. Moreover, due to the radiation scattered from the patient and the equipment, the medical staff is exposed to a non-uniform radiation field that can lead to significant exposure of sensitive body organs and tissues, such as the eye lens, the thyroid, and the extremities. In order to better understand the spatial variation of the dose and dose-rate distributions during an interventional cardiology procedure, the dose distribution around a C-arm fluoroscopic system, in operation in a cardiac cath lab at a Portuguese hospital, was estimated using both Monte Carlo (MC) simulations and dosimetric measurements. To model and simulate the cardiac cath lab, including the fluoroscopic equipment used to execute interventional procedures, the state-of-the-art MC radiation transport code MCNPX 2.7.0 was used. Subsequently, thermoluminescent detector (TLD) measurements were performed in order to validate and support the simulation results obtained for the cath lab model. The preliminary results presented in this study show that the cardiac cath lab model was successfully validated, given the good agreement between MC calculations and TLD measurements. The simulated isodose curves for the C-arm fluoroscopic system are also consistent with the dosimetric information provided by the equipment manufacturer (Siemens). The adequacy of the implemented computational model for simulating complex procedures and mapping dose distributions around the operator and the medical staff is discussed in view of the optimization principle (and the associated ALARA objective), one of the pillars of the international system of radiological protection.

  5. Simulation of streamflow in the McTier Creek watershed, South Carolina

    USGS Publications Warehouse

    Feaster, Toby D.; Golden, Heather E.; Odom, Kenneth R.; Lowery, Mark A.; Conrads, Paul; Bradley, Paul M.

    2010-01-01

    The McTier Creek watershed is located in the Sand Hills ecoregion of South Carolina and is a small catchment within the Edisto River Basin. Two watershed hydrology models were applied to the McTier Creek watershed as part of a larger scientific investigation to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin. The two models are the topography-based hydrological model (TOPMODEL) and the grid-based mercury model (GBMM). TOPMODEL uses the variable-source-area concept for simulating streamflow, and GBMM uses a spatially explicit modified curve-number approach for simulating streamflow. The hydrologic output from TOPMODEL can be used explicitly to simulate the transport of mercury in separate applications, whereas the hydrology output from GBMM is used implicitly in the simulation of mercury fate and transport in GBMM. The modeling efforts were a collaboration between the U.S. Geological Survey and the U.S. Environmental Protection Agency, National Exposure Research Laboratory. Calibrations of TOPMODEL and GBMM were done independently using the same meteorological data and the same period of record of observed data. Two U.S. Geological Survey streamflow-gaging stations were available for comparison of observed daily mean flow with simulated daily mean flow: station 02172300, McTier Creek near Monetta, South Carolina, and station 02172305, McTier Creek near New Holland, South Carolina. The period of record at the Monetta gage covers a broad range of hydrologic conditions, including a drought and a significant wet period. Calibrating the models under these extreme conditions, along with the normal flow conditions included in the record, enhances the robustness of the two models. Several quantitative assessments of the goodness of fit between model simulations and the observed daily mean flows were made, including the Nash-Sutcliffe model-fit efficiency index, Pearson's correlation coefficient, the root mean square error, the bias, and the mean absolute error. In addition, a number of graphical tools were used to assess how well the models captured the characteristics of the observed data at the Monetta and New Holland streamflow-gaging stations, including temporal plots of simulated and observed daily mean flows, flow-duration curves, single-mass curves, and various residual plots. The results indicated that TOPMODEL and GBMM generally produced simulations that reasonably capture the quantity, variability, and timing of the observed streamflow. For the periods modeled, the total volume of simulated daily mean flows compared to the total volume of observed daily mean flow was within 1 to 5 percent for TOPMODEL and within 1 to 10 percent for GBMM. A noticeable characteristic of the simulated hydrographs from both models is the complexity of balancing groundwater recession and flow at the streamgage when flows peak and recede rapidly. GBMM results indicate that groundwater recession, which affects the receding limb of the hydrograph, was more difficult to estimate with the spatially explicit curve-number approach. Although the purpose of this report is not to directly compare the two models, given the characteristics of the McTier Creek watershed and the fact that GBMM uses the spatially explicit curve-number approach as compared to the variable-source-area concept in TOPMODEL, GBMM was able to capture the flow characteristics reasonably well.

  6. Estimation of computed tomography dose index in cone beam computed tomography: MOSFET measurements and Monte Carlo simulations.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald

    2010-05-01

    To address the lack of an accurate dose estimation method in cone beam computed tomography (CBCT), we performed point-dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for the standard and low-dose modes. An MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half-value layer, the x-ray spectrum, and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode. The CTDIw from MC agreed with the MOSFET measurements to within 5%. In conclusion, an MC model for Varian CBCT has been established, and this approach may be readily extended from the CBCT geometry to multi-detector CT geometry.
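
    For reference, the CTDIw values quoted above combine center and periphery chamber readings in the standard one-third/two-thirds weighting. A one-line sketch with hypothetical readings:

    ```python
    # CTDIw = (1/3)*CTDI100,center + (2/3)*CTDI100,periphery.
    def ctdi_w(ctdi_center_cgy: float, ctdi_periphery_cgy: float) -> float:
        return ctdi_center_cgy / 3.0 + 2.0 * ctdi_periphery_cgy / 3.0

    print(ctdi_w(7.1, 9.0))  # hypothetical head-phantom readings in cGy -> 8.37 cGy
    ```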

  7. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  8. Monte Carlo simulations of neutron-scattering instruments using McStas

    NASA Astrophysics Data System (ADS)

    Nielsen, K.; Lefmann, K.

    2000-06-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in instrument design now defeats purely analytical methods. The program McStas, developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in areas such as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small-field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model C; additionally, the results contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small-field applications. Methods: The small-field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all helmet sizes except the 4 mm; for the 4 mm helmet size the detector over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found for the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.
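
    The Alfonso k factor used above is the ratio of dose-to-detector-reading ratios in the clinical field and the machine-specific reference field. A minimal sketch with illustrative numbers only:

    ```python
    # k = (D_clin / M_clin) / (D_msr / M_msr), where D are MC-calculated doses
    # and M are detector readings in the clinical and reference fields.
    def alfonso_k(d_clin: float, m_clin: float, d_msr: float, m_msr: float) -> float:
        return (d_clin / m_clin) / (d_msr / m_msr)

    # Illustrative values: a detector over-responding by ~3% in the small field
    # yields k ~ 0.97, matching the behaviour reported for the 4 mm helmet.
    print(round(alfonso_k(d_clin=0.87, m_clin=0.898, d_msr=1.0, m_msr=1.0), 3))
    ```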

  10. On the definition of a Monte Carlo model for binary crystal growth.

    PubMed

    Los, J H; van Enckevort, W J P; Meekes, H; Vlieg, E

    2007-02-01

    We show that consistency of the transition probabilities in a lattice Monte Carlo (MC) model for binary crystal growth with the thermodynamic properties of a system does not guarantee that MC simulations near equilibrium agree with the thermodynamic equilibrium phase diagram for that system. The deviations remain small for systems with small bond energies, but they can increase significantly for systems with a large melting entropy, typical of molecular systems. These deviations are attributed to the surface kinetics, which is responsible for a metastable zone below the liquidus line where no growth occurs, even in the absence of a 2D nucleation barrier. Here we propose an extension of the MC model that introduces a freedom of choice in the transition probabilities while staying within the thermodynamic constraints. This freedom can be used to eliminate the discrepancy between the MC simulations and the thermodynamic equilibrium phase diagram. Agreement is achieved for the choice of transition probabilities yielding the fastest decrease of the free energy (i.e., the largest growth rate) of the system at a temperature slightly below the equilibrium temperature. An analytical model is developed which reproduces the MC results quite well, enabling a straightforward determination of the optimal set of transition probabilities. Application of both the MC and the analytical model to conditions well away from equilibrium, giving rise to kinetic phase diagrams, shows that the effect of kinetics on segregation is even stronger than that predicted by previous models.

  11. In-simulator training of driving abilities in a person with a traumatic brain injury.

    PubMed

    Gamache, Pierre-Luc; Lavallière, Martin; Tremblay, Mathieu; Simoneau, Martin; Teasdale, Normand

    2011-01-01

    This study reports the case of a 23-year-old woman (MC) who sustained a severe traumatic brain injury in 2004. After her accident, her driving license was revoked. Despite recovering normal neuropsychological functions in the following years, MC was unable to renew her license, failing four on-road evaluations assessing her fitness to drive. In hope of an eventual license renewal, MC went through an in-simulator training programme in the laboratory in 2009. The training programme aimed at improving features of MC's driving behaviour that had been identified as problematic in prior on-road evaluations. To do so, proper driving behaviour was reinforced via driving-specific feedback provided during the training sessions. After 25 sessions in the simulator (over a period of 4 months), MC significantly improved various components of her driving. Notably, compared to early sessions, later ones were associated with a reduced cognitive load, less jerky speed profiles when stopping at intersections, and better vehicle control and positioning. A 1-year retention test showed that most of these improvements persisted. The learning principles underlying well-conducted simulator-based education programmes have a strong scientific basis. A simulator training programme like this one represents a promising avenue for driving rehabilitation, allowing individuals without a driving license to practice and improve their skills in a safe and realistic environment.

  12. Enhanced Master Controller Unit Tester

    NASA Technical Reports Server (NTRS)

    Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony

    2007-01-01

    The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.

  13. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    NASA Astrophysics Data System (ADS)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and to within 1 mm in range. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in the presence of homogeneous, heterogeneous and anthropomorphic phantoms. The computational performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  14. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    PubMed

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and in range to within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous and anthropomorphic phantoms. The computational performance of the RS-MC was similar to that of the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
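
    The gamma analysis in the two records above scores agreement with a 3% dose tolerance and a 3 mm distance-to-agreement. The following is a minimal sketch of a brute-force 1D global gamma computation; the profiles, grid spacing, and normalization choice are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np

    def gamma_pass_rate(ref, eval_, spacing_mm, dose_tol=0.03, dta_mm=3.0):
        """Brute-force 1D global gamma analysis (Low et al. formalism).

        ref, eval_ : dose profiles sampled on the same grid
        spacing_mm : grid spacing in mm
        dose_tol   : dose tolerance as a fraction of the global max dose
        dta_mm     : distance-to-agreement criterion in mm
        """
        x = np.arange(len(ref)) * spacing_mm
        dmax = ref.max()                          # global normalization
        gammas = np.empty(len(ref))
        for i, (xi, di) in enumerate(zip(x, ref)):
            dose_diff = (eval_ - di) / (dose_tol * dmax)
            dist = (x - xi) / dta_mm
            gammas[i] = np.sqrt(dose_diff**2 + dist**2).min()
        return (gammas <= 1.0).mean()             # fraction of points passing

    # Illustrative profiles: a Gaussian "measured" beam vs a 1 mm shifted calculation
    x = np.linspace(0, 100, 201)                  # 0.5 mm grid
    ref = np.exp(-((x - 50) / 15) ** 2)
    calc = np.exp(-((x - 51) / 15) ** 2)
    print(f"pass rate: {gamma_pass_rate(ref, calc, 0.5):.1%}")
    ```

    A point passes when its minimum combined dose/distance deviation is at most 1; the pass rate is the fraction of passing points.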

  15. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is moderately superior to the McFarland compensator for short delays and significantly superior for long delays.

  16. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for full MC simulation.
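
    The core of the method above is a convolution of per-element filter functions with the depth dose profile. A schematic sketch follows; the depth-dose shape, kernel shapes, and elemental weights are invented placeholders, since the real filters are derived per chemical element from an arbitrary reference material.

    ```python
    import numpy as np

    # Depth grid (mm) and an idealized proton depth dose with a Bragg peak near 150 mm
    z = np.arange(0.0, 200.0, 1.0)
    dose = 0.3 + 0.7 * np.exp(-((z - 150.0) / 6.0) ** 2)
    dose[z > 158] = 0.0                       # crude distal fall-off

    # Hypothetical per-element filter functions (one per chemical element of the
    # target); real filters would be derived from nuclear reaction cross sections.
    kz = np.arange(-20.0, 20.0, 1.0)
    filters = {
        "O": 0.8 * np.exp(-(kz / 8.0) ** 2),
        "C": 0.5 * np.exp(-((kz + 3.0) / 10.0) ** 2),
    }
    weights = {"O": 0.65, "C": 0.35}          # assumed elemental mass fractions

    # Prompt-gamma depth profile = weighted sum of (filter * depth dose) convolutions
    pg = sum(weights[el] * np.convolve(dose, f, mode="same")
             for el, f in filters.items())
    print("predicted PG emission peak depth: %.0f mm" % z[np.argmax(pg)])
    ```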

  17. SU-E-T-491: Importance of Energy Dependent Protons Per MU Calibration Factors in IMPT Dose Calculations Using Monte Carlo Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randeniya, S; Mirkovic, D; Titt, U

    2014-06-01

    Purpose: In intensity modulated proton therapy (IMPT), energy dependent protons per monitor unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A "verification plan" (i.e., treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs correspond to 1 MU (the vendor-recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in protons/MU calibration factors between methods one and two translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions preserved the shape of the 3D dose distribution qualitatively, they were different quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations. Research supported by National Cancer Institute grant P01CA021239.
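
    The conversion step at issue above scales MC energy deposition, scored per source proton, into absolute dose via the energy-dependent protons/MU factor. A minimal sketch with hypothetical numbers; it only illustrates that a calibration-factor difference propagates linearly into the dose of the beam it applies to.

    ```python
    # Convert MC energy deposition (MeV per source proton per voxel) to absolute
    # dose in Gy using an energy-dependent protons/MU calibration factor.
    MEV_TO_J = 1.602176634e-19 * 1e6

    def voxel_dose_gy(edep_mev_per_proton, voxel_mass_kg,
                      protons_per_mu, delivered_mu):
        joules = edep_mev_per_proton * MEV_TO_J * protons_per_mu * delivered_mu
        return joules / voxel_mass_kg

    # Hypothetical values: a 2x2x2 mm water voxel (8e-6 kg), one energy layer
    dose_a = voxel_dose_gy(0.05, 8e-6, protons_per_mu=9.0e7, delivered_mu=120)
    dose_b = voxel_dose_gy(0.05, 8e-6, protons_per_mu=9.0e7 * 1.08, delivered_mu=120)
    print(f"8% calibration difference -> {100 * (dose_b / dose_a - 1):.0f}% dose difference")
    ```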

  18. Charge Structure and Counterion Distribution in Hexagonal DNA Liquid Crystal

    PubMed Central

    Dai, Liang; Mu, Yuguang; Nordenskiöld, Lars; Lapp, Alain; van der Maarel, Johan R. C.

    2007-01-01

    A hexagonal liquid crystal of DNA fragments (double-stranded, 150 basepairs) with tetramethylammonium (TMA) counterions was investigated with small angle neutron scattering (SANS). We obtained the structure factors pertaining to the DNA and counterion density correlations with contrast matching in the water. Molecular dynamics (MD) computer simulation of a hexagonal assembly of nine DNA molecules showed that the inter-DNA distance fluctuates with a correlation time around 2 ns and a standard deviation of 8.5% of the interaxial spacing. The MD simulation also showed a minimal effect of the fluctuations in inter-DNA distance on the radial counterion density profile and significant penetration of the grooves by TMA. The radial density profile of the counterions was also obtained from a Monte Carlo (MC) computer simulation of a hexagonal array of charged rods with fixed interaxial spacing. Strong ordering of the counterions between the DNA molecules and the absence of charge fluctuations at longer wavelengths was shown by the SANS number and charge structure factors. The DNA-counterion and counterion structure factors are interpreted with the correlation functions derived from the Poisson-Boltzmann equation, MD, and MC simulation. Best agreement is observed between the experimental structure factors and the prediction based on the Poisson-Boltzmann equation and/or MC simulation. The SANS results show that TMA is too large to penetrate the grooves to a significant extent, in contrast to what is shown by MD simulation. PMID:17098791

  19. TiOx deposited by magnetron sputtering: a joint modelling and experimental study

    NASA Astrophysics Data System (ADS)

    Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.

    2018-05-01

    This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx⩽2 at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to obtain the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma phase hysteresis behaviour, film composition and film morphology are predicted. The methodology can be applied to any coater and any film, paving the way to a virtual coater that allows a user to predict the composition and morphology of films deposited in silico.
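
    The film-growth stage above relies on kinetic Monte Carlo. Below is a minimal sketch of the rejection-free (BKL/Gillespie-style) event selection loop at the core of typical kMC codes; the three surface processes and their rate constants are invented placeholders, not the paper's growth model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def kmc_step(rates, t):
        """One rejection-free kMC step: pick an event with probability
        proportional to its rate, then advance time by an exponential deviate."""
        total = rates.sum()
        cumulative = np.cumsum(rates)
        event = np.searchsorted(cumulative, rng.random() * total)
        dt = -np.log(rng.random()) / total
        return event, t + dt

    # Hypothetical surface processes: adsorption, diffusion hop, desorption (s^-1)
    rates = np.array([5.0, 120.0, 0.8])
    t, counts = 0.0, np.zeros(3, dtype=int)
    for _ in range(10000):
        ev, t = kmc_step(rates, t)
        counts[ev] += 1
    print("event fractions:", counts / counts.sum(), "elapsed time: %.3f s" % t)
    ```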

  20. McStas-model of the delft SESANS

    NASA Astrophysics Data System (ADS)

    Knudsen, E.; Udby, L.; Willendrup, P. K.; Lefmann, K.; Bouwman, W. G.

    2011-06-01

    We present simulation results and first virtual data from a model of the Spin-Echo Small Angle Neutron Scattering (SESANS) instrument situated in Delft, built in the framework of the McStas Monte Carlo software package. The main focus has been on making a model of the Delft SESANS instrument, and we can now present the first virtual data from it, using a refracting prism-like sample model. In consequence, polarisation instrumentation is now included natively in the McStas kernel, including options for magnetic fields and a number of utility components. This development has brought us to a point where realistic models of polarisation-enabled instrumentation can be built.

  1. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    PubMed

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, i.e., generalized polynomial chaos (GPC), is applied for the simulation of the MEMS beam. The doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulation results. Appropriate choices of the fourth-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method. The probability that the fourth-order GPC approximation attains the mean test value of the residual stress is about 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean.

  2. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    PubMed Central

    Gao, Lili

    2017-01-01

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, i.e., generalized polynomial chaos (GPC), is applied for the simulation of the MEMS beam. The doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulation results. Appropriate choices of the fourth-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method. The probability that the fourth-order GPC approximation attains the mean test value of the residual stress is about 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean. PMID:29117096
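
    As a toy illustration of the GPC-versus-MC comparison in the two records above, the sketch below builds a fourth-order Hermite chaos expansion of a scalar response of one standard-normal process deviation and checks its mean and variance against plain MC. The response function and its coefficients are invented for illustration; they are not the MEMS beam model.

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    # Toy response of one standard-normal process deviation xi (illustrative only)
    def response(xi):
        return 20.0 + 3.0 * xi + 0.5 * xi**2 - 0.1 * xi**3

    order = 4
    nodes, weights = hermegauss(order + 1)        # Gauss-Hermite quadrature nodes
    weights = weights / np.sqrt(2.0 * np.pi)      # normalize to the N(0,1) measure

    # Project the response onto probabilists' Hermite polynomials He_k;
    # under N(0,1) the squared norms are E[He_k^2] = k!
    norms = np.array([factorial(k) for k in range(order + 1)], dtype=float)
    coeffs = np.array([
        np.sum(weights * response(nodes) * hermeval(nodes, np.eye(order + 1)[k]))
        for k in range(order + 1)
    ]) / norms

    gpc_mean = coeffs[0]                           # first coefficient is the mean
    gpc_var = np.sum(coeffs[1:] ** 2 * norms[1:])  # variance from higher modes

    rng = np.random.default_rng(0)                 # plain MC reference
    mc = response(rng.standard_normal(200_000))
    print(f"GPC mean/var: {gpc_mean:.3f} / {gpc_var:.3f}")
    print(f"MC  mean/var: {mc.mean():.3f} / {mc.var():.3f}")
    ```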

  3. Advanced Control Algorithms for Compensating the Phase Distortion Due to Transport Delay in Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate the longest delay with the least gain distortion among the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner, allowing the predictor to accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for longer time delays. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
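
    Both this record and record 15 above revolve around predicting the vehicle state ahead by the transport delay. As a hedged illustration only, the sketch below implements a generic least-squares polynomial extrapolation predictor, not the McFarland, adaptive, or state space formulations from the report; the frame rate, delay, window length, and test signal are assumed values.

    ```python
    import numpy as np

    def polynomial_predictor(history, dt, delay, order=2):
        """Predict a signal `delay` seconds ahead by least-squares fitting a
        polynomial to the most recent samples and extrapolating it forward.
        (A generic extrapolation predictor, not the McFarland formulation.)"""
        t = np.arange(-len(history) + 1, 1) * dt   # newest sample sits at t = 0
        coeffs = np.polyfit(t, history, order)
        return np.polyval(coeffs, delay)

    dt, delay, window = 1 / 60, 0.096, 8           # 60 Hz frames, 96 ms delay
    t = np.arange(0.0, 2.0, dt)
    signal = np.sin(2 * np.pi * 0.8 * t)           # pilot-like low-frequency input

    preds = np.array([polynomial_predictor(signal[i - window:i], dt, delay)
                      for i in range(window, len(signal))])
    truth = np.sin(2 * np.pi * 0.8 * (t[window - 1:-1] + delay))
    print("RMS prediction error: %.4f" % np.sqrt(np.mean((preds - truth) ** 2)))
    ```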

  4. Deformation of the Durom acetabular component and its impact on tribology in a cadaveric model--a simulator study.

    PubMed

    Liu, Feng; Chen, Zhefeng; Gu, Yanqing; Wang, Qing; Cui, Weiding; Fan, Weimin

    2012-01-01

    Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the Durom large head metal-on-metal (MOM) total hips in simulators. Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78 ± 8.86 µm. The average volumetric wear rate in the experimental group and in the control group in the first million cycles was 6.65 ± 0.29 mm³/MC and 0.89 ± 0.04 mm³/MC (t = 48.43, p = 0.000). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. These findings imply that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation results in increased volumetric wear and increased ion release. The clinical use of the Durom large head prosthesis should therefore be undertaken with great care.

  5. Development of a NRSE Spectrometer with the Help of McStas - Application to the Design of Present and Future Instruments

    NASA Astrophysics Data System (ADS)

    Kredler, L.; Häußler, W.; Martin, N.; Böni, P.

    Flux is still a major limiting factor in neutron research. For instruments supplied with cold neutrons through neutron guides, both at present steady-state sources and at new spallation neutron sources, it is therefore important to optimize the instrumental setup and the neutron guidance. Optimization of the neutron guide geometry and of the instrument itself can be performed by numerical ray-tracing simulations using existing open-access codes. In this paper, we discuss how such Monte Carlo simulations have been employed to plan improvements of the Neutron Resonant Spin Echo spectrometer RESEDA (FRM II, Germany) as well as the neutron guides before and within the instrument. The essential components have been represented with the help of the McStas ray-tracing package. The expected intensity has been tested by means of several virtual detectors implemented in the simulation code. Comparison between simulations and preliminary measurement results shows good agreement and demonstrates the reliability of the numerical approach. These results will be taken into account in the planning of new components installed in the guide system.

  6. GATE Monte Carlo simulations for variations of an integrated PET/MR hybrid imaging system based on the Biograph mMR model

    NASA Astrophysics Data System (ADS)

    Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.

    2015-06-01

    A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of the effect of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application dealt with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications, such as the development of hardware components and the testing of new PET/MR software algorithms, for example the assessment of point-spread-function-based reconstruction algorithms.

  7. A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14

    NASA Astrophysics Data System (ADS)

    Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.

    2000-03-01

    Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer, I.L.L., were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. Flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.

  8. Monte Carlo simulations in radiotherapy dosimetry.

    PubMed

    Andreo, Pedro

    2018-06-27

    The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in the last decades. Its widespread use in the field has converted this computer simulation technique into a common tool for reference and treatment planning dosimetry calculations. This work reviews the different MC calculations of dosimetric quantities, such as stopping-power ratios and perturbation correction factors required for reference ionization chamber dosimetry, as well as the fully realistic MC simulations currently available of clinical accelerators, detectors and patient treatment planning. Issues raised include the necessity for consistency in the data throughout the entire dosimetry chain in reference dosimetry, and how Bragg-Gray theory breaks down for small photon fields. Both aspects are less critical for MC treatment planning applications, but there are important constraints like tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in different types of treatment planning systems, they mean that the uncertainties involved in MC treatment planning still remain "uncertain".

  9. Analysis of dense-medium light scattering with applications to corneal tissue: experiments and Monte Carlo simulations.

    PubMed

    Kim, K B; Shanyfelt, L M; Hahn, D W

    2006-01-01

    Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, together with animal studies, is necessary.

  10. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

    This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.

  11. Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The 3DHZETRN code, with improved neutron and light ion (Z ≤ 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.

  12. OneSAF as an In-Stride Mission Command Asset

    DTIC Science & Technology

    2014-06-01

    Keywords: Mission Command (MC), Modeling and Simulation (M&S), Distributed Interactive Simulation (DIS). ABSTRACT: To provide greater interoperability and integration within Mission Command (MC) Systems, the One Semi-Automated Forces (OneSAF) entity-level simulation is evolving from a tightly coupled client-server implementation approach. While DARPA began with a funded project to complete the capability as a "big bang" approach, the approach here is based on reuse and...

  13. Computer Simulation of Electron Thermalization in CsI and CsI(Tl)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhiguo; Xie, YuLong; Cannon, Bret D.

    2011-09-15

    A Monte Carlo (MC) model was developed and implemented to simulate the thermalization of electrons in inorganic scintillator materials. The model incorporates electron scattering with both longitudinal optical and acoustic phonons. In this paper, the MC model was applied to simulate electron thermalization in CsI, both pure and doped with a range of thallium concentrations. The inclusion of internal electric fields was shown to increase the fraction of recombined electron-hole pairs and to broaden the thermalization distance and thermalization time distributions. The MC simulations indicate that electron thermalization, following γ-ray excitation, takes place within approximately 10 ps in CsI and that electrons can travel distances up to several hundreds of nanometers. Electron thermalization was studied for a range of incident γ-ray energies using electron-hole pair spatial distributions generated by the MC code NWEGRIM (NorthWest Electron and Gamma Ray Interaction in Matter). These simulations revealed that the partition of thermalized electrons between different species (e.g., recombined with self-trapped holes or trapped at thallium sites) varies with the incident energy. Implications for the phenomenon of nonlinearity in scintillator light yield are discussed.

  14. Efficient hybrid non-equilibrium molecular dynamics--Monte Carlo simulations with symmetric momentum reversal.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2014-09-21

    Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such a hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.

  15. Efficient hybrid non-equilibrium molecular dynamics - Monte Carlo simulations with symmetric momentum reversal

    NASA Astrophysics Data System (ADS)

    Chen, Yunjie; Roux, Benoît

    2014-09-01

    Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such a hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
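
    Since the two records above turn on how momenta are handled around an MD trajectory inside a Metropolis scheme, a toy sketch may help. The following is a minimal equilibrium hybrid MD-MC (HMC) move on a 1D double well, assuming a plain leapfrog integrator and full momentum refreshment each move; the neMD-MC schemes discussed above generalize this by driving the system during the trajectory and adding a momentum-reversal prescription at its ends.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    kT, mass, dt, n_leap = 1.0, 1.0, 0.05, 20

    U = lambda x: (x**2 - 1.0)**2           # 1D double-well potential
    F = lambda x: -4.0 * x * (x**2 - 1.0)   # force = -dU/dx

    def hybrid_move(x):
        """One hybrid MD-MC move: fresh Maxwell-Boltzmann momentum, a short
        leapfrog trajectory, then Metropolis acceptance on the total-energy change."""
        p = rng.normal(0.0, np.sqrt(mass * kT))
        h0 = U(x) + p**2 / (2 * mass)
        xn, pn = x, p
        pn += 0.5 * dt * F(xn)              # leapfrog integration
        for _ in range(n_leap - 1):
            xn += dt * pn / mass
            pn += dt * F(xn)
        xn += dt * pn / mass
        pn += 0.5 * dt * F(xn)
        dh = U(xn) + pn**2 / (2 * mass) - h0
        return (xn, True) if rng.random() < np.exp(-dh / kT) else (x, False)

    x, acc, samples = 1.0, 0, []
    for _ in range(20000):
        x, accepted = hybrid_move(x)
        acc += accepted
        samples.append(x)
    print(f"acceptance: {acc / 20000:.2f}, <x^2> = {np.mean(np.square(samples)):.3f}")
    ```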

  16. Correction for human head motion in helical x-ray CT

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.

    2016-02-01

    Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.

  17. Metabolite-cycled STEAM and semi-LASER localization for MR spectroscopy of the human brain at 9.4T.

    PubMed

    Giapitzakis, Ioannis-Angelos; Shao, Tingting; Avdievich, Nikolai; Mekle, Ralf; Kreis, Roland; Henning, Anke

    2018-04-01

    Metabolite cycling (MC) is an MRS technique for the simultaneous acquisition of water and metabolite spectra that avoids chemical exchange saturation transfer effects and for which water may serve as a reference signal or contain additional information in functional or diffusion studies. Here, MC was developed for human investigations at ultrahigh field. MC-STEAM and MC-semi-LASER are introduced at 9.4T with an optimized inversion pulse and elaborate coil setup. Experimental and simulation results are given for the implementation of adiabatic inversion pulses for MC. The two techniques are compared, and the effect of frequency and phase correction based on the MC water spectra is evaluated. Finally, absolute quantification of metabolites is performed. The proposed coil configuration results in a maximum B1+ of 48 μT in a voxel within the occipital lobe. Frequency and phase correction of single acquisitions improve the signal-to-noise ratio (SNR) and linewidth, leading to high-resolution spectra. The improvements in the SNR of N-acetylaspartate (SNRNAA) for frequency-aligned data acquired with MC-STEAM and MC-semi-LASER are 37% and 30%, respectively (P < 0.05). Moreover, a doubling of the SNRNAA for MC-semi-LASER in comparison with MC-STEAM is observed (P < 0.05). Concentration levels for 18 metabolites from the human occipital lobe are reported, as acquired with both MC-STEAM and MC-semi-LASER. This work introduces a novel methodology for single-voxel MRS on a 9.4T whole-body scanner and highlights the advantages of semi-LASER compared to STEAM in terms of excitation profile. In comparison with MC-STEAM, MC-semi-LASER yields spectra with higher SNR. Magn Reson Med 79:1841-1850, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  18. Simulation of water removal process and optimization of aeration strategy in sewage sludge composting.

    PubMed

    Zhou, Hai-Bin; Chen, Tong-Bin; Gao, Ding; Zheng, Guo-Di; Chen, Jun; Pan, Tian-Hao; Liu, Hong-Tao; Gu, Run-Yao

    2014-11-01

    Reducing moisture in sewage sludge is one of the main goals of sewage sludge composting and biodrying. A mathematical model was used to simulate the performance of water removal under different aeration strategies. Additionally, the correlations between temperature, moisture content (MC), volatile solids (VS), oxygen content (OC), and ambient air temperature and aeration strategies were predicted. The mathematical model was verified based on coefficients of correlation between the measured and predicted results of over 0.80 for OC, MC, and VS, and 0.72 for temperature. The results of the simulation showed that water reduction was enhanced when the average aeration rate (AR) increased to 15.37 m³ min⁻¹ (6/34 min/min, AR: 102.46 m³ min⁻¹), above which no further increase was observed. Furthermore, more water was removed under a higher on/off time of 7/33 (min/min, AR: 87.34 m³ min⁻¹), and when ambient air temperature was higher.

  19. Simulations and experiments on RITA-2 at PSI

    NASA Astrophysics Data System (ADS)

    Klausen, S. N.; Lefmann, K.; McMorrow, D. F.; Altorfer, F.; Janssen, S.; Lüthy, M.

    The cold-neutron triple-axis spectrometer RITA-2, designed and built at Risø National Laboratory, was installed at the neutron source SINQ at the Paul Scherrer Institute in April/May 2001. In connection with the installation of RITA-2, computer simulations were performed using the neutron ray-tracing package McStas. The simulation results are compared to real experimental results obtained with a powder sample. In particular, the flux at the sample position and the resolution function of the spectrometer are investigated.

  20. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the platform of the Microsoft® cloud), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
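
    The runtime scaling reported above follows Amdahl's law. A small sketch below inverts the law at the reported operating point (37× on 64 instances) to estimate the implied parallel fraction, then predicts speedups at other cluster sizes; the ideal law ignores the growing non-parallelizable fraction the authors observed.

    ```python
    # Amdahl's law: S(N) = 1 / ((1 - p) + p / N), where p is the parallel fraction.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Invert from the reported point: a speedup of 37x on 64 instances
    s, n = 37.0, 64
    p = (1.0 - 1.0 / s) / (1.0 - 1.0 / n)   # solve S = 1/((1-p)+p/n) for p
    print(f"implied parallel fraction: {p:.4f}")
    for m in (8, 16, 32, 64, 128):
        print(f"predicted speedup on {m:>3} instances: {amdahl_speedup(p, m):5.1f}x")
    ```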

  1. Proton-induced x-ray fluorescence CT imaging

    PubMed Central

    Bazalova-Carter, Magdalena; Ahmad, Moiz; Matsuura, Taeko; Takao, Seishin; Matsuo, Yuto; Fahrig, Rebecca; Shirato, Hiroki; Umegaki, Kikuo; Xing, Lei

    2015-01-01

    Purpose: To demonstrate the feasibility of proton-induced x-ray fluorescence CT (pXFCT) imaging of gold in a small-animal-sized object by means of experiments and Monte Carlo (MC) simulations. Methods: First, proton-induced gold x-ray fluorescence (pXRF) was measured as a function of gold concentration. Vials of 2.2 cm in diameter filled with 0%–5% Au solutions were irradiated with a 220 MeV proton beam, and the x-ray fluorescence induced by the interaction of protons and Au was detected with a 3 × 3 mm2 CdTe detector placed at 90° with respect to the incident proton beam at a distance of 45 cm from the vials. Second, a 7-cm diameter water phantom containing three 2.2-cm diameter vials with 3%–5% Au solutions was imaged with a 7-mm FWHM 220 MeV proton beam in a first-generation CT scanning geometry. X-rays scattered perpendicular to the incident proton beam were acquired with the CdTe detector placed at 45 cm from the phantom positioned on a translation/rotation stage. Twenty-one translational steps spaced by 3 mm at each of 36 projection angles spaced by 10° were acquired, and pXFCT images of the phantom were reconstructed with filtered back projection. A simplified geometry of the experimental data acquisition setup was modeled with the MC TOPAS code, and simulation results were compared to the experimental data. Results: A linear relationship between gold pXRF and gold concentration was observed in both experimental and MC simulation data (R2 > 0.99). All Au vials were apparent in the experimental and simulated pXFCT images. Specifically, the 3% Au vial was detectable in the experimental [contrast-to-noise ratio (CNR) = 5.8] and simulated (CNR = 11.5) pXFCT images. Due to fluorescence x-ray attenuation in the higher concentration vials, the 4% and 5% Au contrasts were underestimated by 10% and 15%, respectively, in both the experimental and simulated pXFCT images. Conclusions: Proton-induced x-ray fluorescence CT imaging of 3%–5% gold solutions in a small-animal-sized water phantom has been demonstrated for the first time by means of experiments and MC simulations. PMID:25652502
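
    Detectability above is quantified by the contrast-to-noise ratio. A minimal sketch, assuming the common definition CNR = (mean ROI − mean background) / (std of background) and a synthetic image with one hot vial; the geometry and noise level are illustrative, not the pXFCT data.

    ```python
    import numpy as np

    def cnr(image, roi_mask, bg_mask):
        """Contrast-to-noise ratio: (mean ROI - mean background) / std background."""
        return (image[roi_mask].mean() - image[bg_mask].mean()) / image[bg_mask].std()

    # Illustrative reconstruction: noisy background with one hot circular vial
    rng = np.random.default_rng(7)
    img = rng.normal(100.0, 5.0, (64, 64))
    yy, xx = np.mgrid[:64, :64]
    vial = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
    img[vial] += 30.0
    bg = (yy - 32) ** 2 + (xx - 32) ** 2 > 20 ** 2
    print(f"CNR = {cnr(img, vial, bg):.1f}")
    ```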

  2. Comparison of TOPMODEL streamflow simulations using NEXRAD-based and measured rainfall data, McTier Creek watershed, South Carolina

    USGS Publications Warehouse

    Feaster, Toby D.; Westcott, Nancy E.; Hudson, Robert J.M.; Conrads, Paul; Bradley, Paul M.

    2012-01-01

    Rainfall is an important forcing function in most watershed models. As part of a previous investigation to assess interactions among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations in the Edisto River Basin, the topography-based hydrological model (TOPMODEL) was applied in the McTier Creek watershed in Aiken County, South Carolina. Measured rainfall data from six National Weather Service (NWS) Cooperative (COOP) stations surrounding the McTier Creek watershed were used to calibrate the McTier Creek TOPMODEL. Since the 1990s, the next generation weather radar (NEXRAD) has provided rainfall estimates at a finer spatial and temporal resolution than the NWS COOP network. For this investigation, NEXRAD-based rainfall data were generated at the NWS COOP stations and compared with measured rainfall data for the period June 13, 2007, to September 30, 2009. Likewise, these NEXRAD-based rainfall data were used with TOPMODEL to simulate streamflow in the McTier Creek watershed and then compared with the simulations made using measured rainfall data. NEXRAD-based rainfall data for non-zero rainfall days were lower than measured rainfall data at all six NWS COOP locations. The total number of concurrent days for which both measured and NEXRAD-based data were available at the COOP stations ranged from 501 to 833, the number of non-zero days ranged from 139 to 209, and the total difference in rainfall ranged from -1.3 to -21.6 inches. With the calibrated TOPMODEL, simulations using NEXRAD-based rainfall data and those using measured rainfall data produce similar results with respect to matching the timing and shape of the hydrographs. Comparison of the bias, which is the mean of the residuals between observed and simulated streamflow, however, reveals that simulations using NEXRAD-based rainfall tended to underpredict streamflow overall. Given that the total NEXRAD-based rainfall for the simulation period was lower than the total measured rainfall at the NWS COOP locations, this bias would be expected. Therefore, to better assess the effect of using NEXRAD-based rainfall estimates rather than NWS COOP rainfall data on the hydrologic simulations, TOPMODEL was recalibrated and updated simulations were made using the NEXRAD-based rainfall data. Comparisons of observed and simulated streamflow show that the TOPMODEL results using measured rainfall data and NEXRAD-based rainfall are comparable. Nonetheless, TOPMODEL simulations using NEXRAD-based rainfall still tended to underpredict total streamflow volume, although the magnitudes of the differences were similar to those of the simulations using measured rainfall. The McTier Creek watershed was subdivided into 12 subwatersheds and NEXRAD-based rainfall data were generated for each subwatershed. Simulations of streamflow were generated for each subwatershed using NEXRAD-based rainfall and compared with subwatershed simulations using measured rainfall data, which, unlike the NEXRAD-based rainfall, were the same data for all subwatersheds (derived from a weighted average of the six NWS COOP stations surrounding the basin). For the two simulations, subwatershed streamflows were summed and compared to streamflow simulations at two U.S. Geological Survey streamgages. The percentage differences at the gage near Monetta, South Carolina, were the same for simulations using measured rainfall data and NEXRAD-based rainfall. At the gage near New Holland, South Carolina, the percentage differences using the NEXRAD-based rainfall were twice those using the measured rainfall. Single-mass curve comparisons showed an increase in the total volume of rainfall from north to south. Similar comparisons of the measured rainfall at the NWS COOP stations showed similar percentage differences, but the NEXRAD-based rainfall variations occurred over a much smaller distance than the measured rainfall. Nonetheless, it was concluded that in some cases, using NEXRAD-based rainfall data in TOPMODEL streamflow simulations may provide an effective alternative to using measured rainfall data. For this investigation, however, TOPMODEL streamflow simulations using NEXRAD-based rainfall data for both calibration and simulations did not show significant improvements in matching observed streamflow over simulations generated using measured rainfall data.
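
    The bias metric used above is the mean of the residuals between observed and simulated streamflow. A minimal sketch, assuming residuals are defined as simulated minus observed so that a negative bias indicates underprediction; the daily values are illustrative, not McTier Creek data.

    ```python
    import numpy as np

    def bias(observed, simulated):
        """Mean of residuals; negative values indicate underprediction."""
        return np.mean(simulated - observed)

    # Illustrative daily streamflow (cfs): the simulation runs systematically low
    obs = np.array([12.0, 15.5, 30.2, 22.1, 18.4, 14.9])
    sim = np.array([11.1, 14.0, 26.8, 20.5, 17.9, 13.7])
    print(f"bias = {bias(obs, sim):+.2f} cfs")
    ```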

  3. Magnetohydrodynamic simulation of the interaction between two interplanetary magnetic clouds and its consequent geoeffectiveness

    NASA Astrophysics Data System (ADS)

    Xiong, Ming; Zheng, Huinan; Wu, S. T.; Wang, Yuming; Wang, Shui

    2007-11-01

    Numerical studies of interplanetary "multiple magnetic clouds (Multi-MC)" are performed with a 2.5-dimensional ideal magnetohydrodynamic (MHD) model in the heliospheric meridional plane. Both the slow MC1 and the fast MC2 initially emerge along the heliospheric equator, one after another with different time intervals. The coupling of the two MCs can be considered as the comprehensive interaction between two systems, each comprising an MC body and its driven shock. The MC2-driven shock and the MC2 body are successively involved in the interaction with the MC1 body. Momentum is transferred from MC2 to MC1. After the passage of the MC2-driven shock front, magnetic field lines in the MC1 medium previously compressed by the MC2-driven shock are prevented from being restored by the pushing of the MC2 body. The MC1 body undergoes the most violent compression from the ambient solar wind ahead, continuous penetration of the MC2-driven shock through the MC1 body, and persistent pushing of the MC2 body at the MC1 tail boundary. As the evolution proceeds, the MC1 body suffers larger and larger compression, and its originally vulnerable magnetic elasticity becomes stiffer and stiffer. There therefore exists a maximum compressibility of the Multi-MC, reached when the accumulated elasticity can balance the external compression. This cutoff limit of compressibility mainly decides the maximum available geoeffectiveness of the Multi-MC, because the geoeffectiveness enhancement of interacting MCs is ascribed to the compression. In particular, the greatest geoeffectiveness among all combinations of MC helicities is excited if the magnetic field lines in the interacting region of the Multi-MC are all southward. The Multi-MC completes its final evolutionary stage when the MC2-driven shock merges with the MC1-driven shock into a stronger compound shock. With respect to Multi-MC geoeffectiveness, the evolution stage is a dominant factor, whereas the collision intensity is a subordinate one. The magnetic elasticity and magnetic helicity of each MC, and the compression between them, are the key physical factors for the formation, propagation, evolution, and resulting geoeffectiveness of an interplanetary Multi-MC.

  4. Simulating the response of natural ecosystems and their fire regimes to climatic variability in Alaska.

    Treesearch

    D. Bachelet; J. Lenihan; R. Neilson; R. Drapek; T. Kittel

    2005-01-01

    The dynamic global vegetation model MC1 was used to examine climate, fire, and ecosystems interactions in Alaska under historical (1922-1996) and future (1997-2100) climate conditions. Projections show that by the end of the 21st century, 75%-90% of the area simulated as tundra in 1922 is replaced by boreal and temperate forest. From 1922 to 1996, simulation results...

  5. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
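
    The MC side of the comparison above propagates an uncertain roughness coefficient through the flow equations. The sketch below does this for the much simpler case of steady uniform flow via Manning's equation rather than the Saint-Venant system; the channel geometry, slope, and roughness spread are assumed values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Manning's equation for steady uniform flow velocity (SI units):
    # v = (1/n) * R^(2/3) * sqrt(S0)
    def manning_velocity(n, hydraulic_radius=1.2, slope=5e-4):
        return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

    # Uncertain roughness coefficient: illustrative uniform spread around 0.035
    n_samples = rng.uniform(0.030, 0.040, size=100_000)
    v = manning_velocity(n_samples)

    print(f"ensemble mean velocity: {v.mean():.3f} m/s")
    print(f"ensemble std:           {v.std():.3f} m/s")
    print("5th/95th percentiles:   %.3f / %.3f m/s" % tuple(np.percentile(v, [5, 95])))
    ```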

  6. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics as more particle tracks can be simulated in a short response time.
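
    The parallelization pattern described here is straightforward to sketch. Below is a minimal, hedged illustration in Python using mpi4py in place of the paper's C/MPI implementation, with NumPy's SeedSequence spawning standing in for the SPRNG/DCMT parallel generator libraries; the "transport" itself is a toy exponential free-flight model, not MC4's physics.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      N_TOTAL = 100_000               # total particle histories (illustrative)
      local_n = N_TOTAL // size       # histories assigned to this rank

      # Independent, non-overlapping random streams per rank.
      rng = np.random.default_rng(np.random.SeedSequence(12345).spawn(size)[rank])

      def track_one(mu_t=1.0, p_absorb=0.3):
          """Toy history: exponential free flights until absorption."""
          depth = 0.0
          while True:
              depth += rng.exponential(1.0 / mu_t)
              if rng.random() < p_absorb:
                  return depth

      local_sum = sum(track_one() for _ in range(local_n))
      total = comm.reduce(local_sum, op=MPI.SUM, root=0)
      if rank == 0:
          print(f"mean absorption depth: {total / (local_n * size):.4f}")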

  7. Properties of a planar electric double layer under extreme conditions investigated by classical density functional theory and Monte Carlo simulations.

    PubMed

    Zhou, Shiqi; Lamperski, Stanisław; Zydorczak, Maria

    2014-08-14

    Monte Carlo (MC) simulation and classical density functional theory (DFT) results are reported for the structural and electrostatic properties of a planar electric double layer containing ions with highly asymmetric diameters or valencies under extreme concentration conditions. In the applied DFT, the excess free energy contribution due to hard-sphere repulsion is described by a recently elaborated extended form of the fundamental measure functional, and the coupling of Coulombic and short-range hard-sphere repulsion is described by a traditional second-order functional perturbation expansion approximation. Comparison between the MC and DFT results indicates that the validity interval of the traditional DFT approximation extends to ion valences as high as 3 and diameter ratios as high as 4, whether the high-valence or large ions are co- or counter-ions, and to bulk electrolyte concentrations close to the upper limit that the MC simulation can handle well. The dependence of the DFT accuracy on the ion parameters can be self-consistently explained using arguments of liquid-state theory, and new EDL phenomena are observed, such as an overscreening effect due to monovalent counter-ions, an extreme layering effect of counter-ions, and the appearance of a depletion layer with almost no counter- or co-ions.

  8. Precipitation of energetic neutral atoms and induced non-thermal escape fluxes from the Martian atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewkow, N. R.; Kharchenko, V.

    2014-08-01

    The precipitation of energetic neutral atoms, produced through charge exchange collisions between solar wind ions and thermal atmospheric gases, is investigated for the Martian atmosphere. Connections between parameters of precipitating fast ions and resulting escape fluxes, altitude-dependent energy distributions of fast atoms and their coefficients of reflection from the Mars atmosphere, are established using accurate cross sections in Monte Carlo (MC) simulations. Distributions of secondary hot (SH) atoms and molecules, induced by precipitating particles, have been obtained and applied for computations of the non-thermal escape fluxes. A new collisional database on accurate energy-angular-dependent cross sections, required for description of the energy-momentum transfer in collisions of precipitating particles and production of non-thermal atmospheric atoms and molecules, is reported with analytic fitting equations. Three-dimensional MC simulations with accurate energy-angular-dependent cross sections have been carried out to track large ensembles of energetic atoms in a time-dependent manner as they propagate into the Martian atmosphere and transfer their energy to the ambient atoms and molecules. Results of the MC simulations on the energy-deposition altitude profiles, reflection coefficients, and time-dependent atmospheric heating, obtained for the isotropic hard sphere and anisotropic quantum cross sections, are compared. Atmospheric heating rates, thermalization depths, altitude profiles of production rates, energy distributions of SH atoms and molecules, and induced escape fluxes have been determined.

  9. Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly to the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
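
    For concreteness, the following is a minimal Python sketch of one of the Markov-chain diagnostics named above, the Gelman-Rubin statistic, applied to batched tally estimates from independent runs. The synthetic "tally" chains are invented; in practice they would be per-generation eigenvalue or flux tallies from a code such as KENO.

      import numpy as np

      def gelman_rubin(chains):
          """chains: (m, n) array of m independent chains of n tally batches.
          Returns the potential scale reduction factor R-hat; values well
          above 1 suggest non-convergence (a symptom of undersampling)."""
          m, n = chains.shape
          W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
          B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
          var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
          return np.sqrt(var_plus / W)

      rng = np.random.default_rng(0)
      good = rng.normal(1.0, 0.05, size=(4, 500))        # well-mixed chains
      bad = good + rng.normal(0.0, 0.2, size=(4, 1))     # mutually offset chains
      print(f"R-hat (converged):    {gelman_rubin(good):.3f}")
      print(f"R-hat (undersampled): {gelman_rubin(bad):.3f}")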

  10. Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.

    PubMed

    Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.

  11. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.
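
    The agreement metric quoted above is easy to reproduce in outline. The following minimal Python sketch computes the average relative dose difference between two dose grids, restricted to voxels receiving more than 10% of the maximum dose; the random arrays merely stand in for goMC/gDPM outputs, which are not public.

      import numpy as np

      def mean_dose_difference(dose_a, dose_b, threshold=0.10):
          """Mean |D_a - D_b| / D_max over voxels with D_a above the threshold."""
          d_max = dose_a.max()
          mask = dose_a > threshold * d_max
          return np.abs(dose_a[mask] - dose_b[mask]).mean() / d_max

      rng = np.random.default_rng(1)
      dose_ref = rng.random((64, 64, 64))               # stand-in dose grid
      dose_test = dose_ref * (1 + rng.normal(0, 0.005, dose_ref.shape))
      print(f"mean dose difference: "
            f"{100 * mean_dose_difference(dose_ref, dose_test):.2f}%")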

  13. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
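
    A minimal Python sketch of such a non-stationary MC experiment, under invented GEV parameters, is given below: each year of a 50-yr design lifespan gets its own GEV distribution (a simple assumed linear drift in the location parameter stands in for the per-year VIC/GEV fits), 10,000 lifespan realizations are drawn, and a design quantile is compared against a stationary baseline.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(7)
      lifespan, n_real = 50, 10_000
      c, scale = -0.1, 10.0        # assumed GEV shape and scale
      loc0, trend = 100.0, 0.3     # location drifts 0.3 units/yr (assumed)

      # Non-stationary: each year's annual extreme comes from that year's GEV.
      locs = loc0 + trend * np.arange(lifespan)
      annual = genextreme.rvs(c, loc=locs, scale=scale,
                              size=(n_real, lifespan), random_state=rng)
      ns_design = np.percentile(annual.max(axis=1), 99)

      # Stationary baseline: every year reuses the year-0 parameters.
      stat = genextreme.rvs(c, loc=loc0, scale=scale,
                            size=(n_real, lifespan), random_state=rng)
      st_design = np.percentile(stat.max(axis=1), 99)

      print(f"99th-percentile lifespan maximum, stationary:     {st_design:.1f}")
      print(f"99th-percentile lifespan maximum, non-stationary: {ns_design:.1f}")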

  14. Subtle Monte Carlo Updates in Dense Molecular Systems.

    PubMed

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.

  15. McStas 1.7 - a new version of the flexible Monte Carlo neutron scattering package

    NASA Astrophysics Data System (ADS)

    Willendrup, Peter; Farhi, Emmanuel; Lefmann, Kim

    2004-07-01

    Current neutron instrumentation is both complex and expensive, and accurate simulation has become essential both for building new instruments and for using them effectively. The McStas neutron ray-trace simulation package is a versatile tool for producing such simulations, developed in collaboration between Risø and ILL. The new version (1.7) has many improvements, among them added support for the popular Microsoft Windows platform. This presentation will demonstrate a selection of the new features through a simulation of the ILL IN6 beamline.

  16. Groundwater availability in the Crouch Branch and McQueen Branch aquifers, Chesterfield County, South Carolina, 1900-2012

    USGS Publications Warehouse

    Campbell, Bruce G.; Landmeyer, James E.

    2014-01-01

    Chesterfield County is located in the northeastern part of South Carolina along the southern border of North Carolina and is primarily underlain by unconsolidated sediments of Late Cretaceous age and younger of the Atlantic Coastal Plain. Approximately 20 percent of Chesterfield County is in the Piedmont Physiographic Province, and this area of the county is not included in this study. These Atlantic Coastal Plain sediments compose two productive aquifers: the Crouch Branch aquifer that is present at land surface across most of the county and the deeper, semi-confined McQueen Branch aquifer. Most of the potable water supplied to residents of Chesterfield County is produced from the Crouch Branch and McQueen Branch aquifers by a well field located near McBee, South Carolina, in the southwestern part of the county. Overall, groundwater availability is good to very good in most of Chesterfield County, especially the area around and to the south of McBee, South Carolina. The eastern part of Chesterfield County does not have as abundant groundwater resources but resources are generally adequate for domestic purposes. The primary purpose of this study was to determine groundwater-flow rates, flow directions, and changes in water budgets over time for the Crouch Branch and McQueen Branch aquifers in the Chesterfield County area. This goal was accomplished by using the U.S. Geological Survey finite-difference MODFLOW groundwater-flow code to construct and calibrate a groundwater-flow model of the Atlantic Coastal Plain of Chesterfield County. The model was created with a uniform grid size of 300 by 300 feet to facilitate a more accurate simulation of groundwater-surface-water interactions. The model consists of 617 rows from north to south extending about 35 miles and 884 columns from west to east extending about 50 miles, yielding a total area of about 1,750 square miles. However, the active part of the modeled area, or the part where groundwater flow is simulated, totaled about 1,117 square miles. Major types of data used as input to the model included groundwater levels, groundwater-use data, and hydrostratigraphic data, along with estimates and measurements of stream base flows made specifically for this study. The groundwater-flow model was calibrated to groundwater-level and stream base-flow conditions from 1900 to 2012 using 39 stress periods. The model was calibrated with an automated parameter-estimation approach using the computer program PEST, and the model used regularized inversion and pilot points. The groundwater-flow model was calibrated using field data that included groundwater levels that had been collected between 1940 and 2012 from 239 wells and base-flow measurements from 44 locations distributed within the study area. To better understand recharge and inter-aquifer interactions, seven wells were equipped with continuous groundwater-level recording equipment during the course of the study, between 2008 and 2012. These water levels were included in the model calibration process. The observed groundwater levels were compared to the simulated ones, and acceptable calibration fits were achieved. Root mean square error for the simulated groundwater levels compared to all observed groundwater levels was 9.3 feet for the Crouch Branch aquifer and 8.6 feet for the McQueen Branch aquifer. The calibrated groundwater-flow model was then used to calculate groundwater budgets for the entire study area and for two sub-areas. 
The sub-areas are the Alligator Rural Water and Sewer Company well field near McBee, South Carolina, and the Carolina Sandhills National Wildlife Refuge acquisition boundary area. For the overall model area, recharge rates vary from 56 to 1,679 million gallons per day (Mgal/d) with a mean of 737 Mgal/d over the simulation period (1900–2012). The simulated water budget for the streams and rivers varies from 653 to 1,127 Mgal/d with a mean of 944 Mgal/d. The simulated "storage-in term" ranges from 0 to 565 Mgal/d with a mean of 276 Mgal/d. The simulated "storage-out term" has a range of 0 to 552 Mgal/d with a mean of 77 Mgal/d. Groundwater budgets for the McBee, South Carolina, area and the Carolina Sandhills National Wildlife Refuge acquisition area had similar results. An analysis of the effects of past and current groundwater withdrawals on base flows in the McBee area indicated a negligible effect of pumping from the Alligator Rural Water and Sewer well field on local stream base flows. Simulated base flows for 2012 for selected streams in and around the McBee area were similar with and without simulated groundwater withdrawals from the well field. Removing all pumping from the model for the entire simulation period (1900–2012) produces a negligible difference in increased base flow for the selected streams. The 2012 flow for Lower Alligator Creek was 5.04 Mgal/d with the wells pumping and 5.08 Mgal/d without the wells pumping; this represents the largest difference in simulated flows for the six streams.

  17. Feasibility assessment of the interactive use of a Monte Carlo algorithm in treatment planning for intraoperative electron radiation therapy

    NASA Astrophysics Data System (ADS)

    Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés

    2014-12-01

    This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.

  18. SU-E-T-412: Evaluation of Tungsten-Based Functional Paper for Attenuation Device in Intraoperative Radiotherapy for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamomae, T; Monzen, H; Okudaira, K

    Purpose: Intraoperative radiotherapy (IORT) with an electron beam is one of the accelerated partial breast irradiation methods that have recently been used in early-stage breast cancer. A protective acrylic resin-copper disk is inserted between the breast tissue and the pectoralis muscle to minimize the dose received by the posterior structures. However, a problem with this protective disk is that the surgical incision must be larger than the field size because the disk is manufactured from stiff and unyielding materials. The purpose of this study was to assess the applicability of a new tungsten-based functional paper (TFP) as an alternative to the existing protective disk in IORT. Methods: The newly introduced TFP (Toppan Printing Co., Ltd., Tokyo, JP) is anticipated to become a useful device that is lead-free, light, flexible, and easily processed. The radiation shielding performance of TFP was verified by experimental measurements and Monte Carlo (MC) simulations using the PHITS code. The doses transmitted through the protective disk or TFP were measured on a Mobetron mobile accelerator. The same geometries were then reproduced, and the dose distributions were simulated by the MC method. Results: The percentages of transmitted dose relative to the absence of the existing protective disk were lower than 2% in both the measurements and MC simulations. In the experimental measurements, the percentages of transmitted dose for a 9 MeV electron beam were 48.1, 2.3, and 0.6% with TFP thicknesses of 1.9, 3.7, and 7.4 mm, respectively. The percentages for a 12 MeV beam were 76.0, 49.3, 20.0, and 5.5% with TFP thicknesses of 1.9, 3.7, 7.4, and 14.8 mm, respectively. The results of the MC simulation showed a slight dose increase at the incident surface of the TFP caused by backscattered radiation. Conclusion: The results indicate that a small-incision procedure may be possible by the use of TFP.
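
    As a rough empirical check on the transmission figures quoted above, one can fit an effective exponential attenuation curve to the 12 MeV measurements. Electron-beam dose transmission is not strictly exponential, so the Python sketch below gives only a crude effective-coefficient estimate, not the method used in the study.

      import numpy as np

      thickness_mm = np.array([1.9, 3.7, 7.4, 14.8])            # TFP thickness
      transmission = np.array([76.0, 49.3, 20.0, 5.5]) / 100    # 12 MeV data

      # Log-linear least-squares fit: T(x) ~ exp(intercept - mu_eff * x).
      slope, intercept = np.polyfit(thickness_mm, np.log(transmission), 1)
      print(f"effective attenuation coefficient: {-slope:.3f} /mm")
      # Thickness reaching 2% transmission under this crude model:
      print(f"thickness for 2% transmission: "
            f"{(np.log(0.02) - intercept) / slope:.1f} mm")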

  19. Schwarzschild-de Sitter spacetimes, McVittie coordinates, and trumpet geometries

    NASA Astrophysics Data System (ADS)

    Dennison, Kenneth A.; Baumgarte, Thomas W.

    2017-12-01

    Trumpet geometries play an important role in numerical simulations of black hole spacetimes, which are usually performed under the assumption of asymptotic flatness. Our Universe is not asymptotically flat, however, which has motivated numerical studies of black holes in asymptotically de Sitter spacetimes. We derive analytical expressions for trumpet geometries in Schwarzschild-de Sitter spacetimes by first generalizing the static maximal trumpet slicing of the Schwarzschild spacetime to static constant mean curvature trumpet slicings of Schwarzschild-de Sitter spacetimes. We then switch to a comoving isotropic radial coordinate which results in a coordinate system analogous to McVittie coordinates. At large distances from the black hole the resulting metric asymptotes to a Friedmann-Lemaître-Robertson-Walker metric with an exponentially-expanding scale factor. While McVittie coordinates have another asymptotically de Sitter end as the radial coordinate goes to zero, so that they generalize the notion of a "wormhole" geometry, our new coordinates approach a horizon-penetrating trumpet geometry in the same limit. Our analytical expressions clarify the role of time-dependence, boundary conditions and coordinate conditions for trumpet slices in a cosmological context, and provide a useful test for black hole simulations in asymptotically de Sitter spacetimes.

  20. Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal

    NASA Astrophysics Data System (ADS)

    Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.

    2014-05-01

    Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering and other disciplines where materials are exposed to different forms of irradiation. Rutherford Backscattering/Channeling (RBS/C) and Monte Carlo (MC) simulations are the most convenient tools for this purpose, as they allow one to determine several features of lattice defects: their type, concentration and damage accumulation kinetics. On the other hand, various irradiation conditions can be efficiently modeled by ion irradiation without inducing radioactivity in the sample. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method for studying radiation damage in materials. The paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of geometrical parameters of crystal lattice deformation in the vicinity of dislocations. Once these parameters and their variations within a distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth distribution profiles. The final outcome of the deconvolution procedure is cross-section values calculated for the two types of defects observed (RDA and dislocations).

  1. Atomistic Free Energy Model for Nucleic Acids: Simulations of Single-Stranded DNA and the Entropy Landscape of RNA Stem-Loop Structures.

    PubMed

    Mak, Chi H

    2015-11-25

    While single-stranded (ss) segments of DNAs and RNAs are ubiquitous in biology, details about their structures have only recently begun to emerge. To study ssDNA and RNAs, we have developed a new Monte Carlo (MC) simulation using a free energy model for nucleic acids that has the atomistic accuracy to capture fine molecular details of the sugar-phosphate backbone. Formulated on the basis of a first-principle calculation of the conformational entropy of the nucleic acid chain, this free energy model correctly reproduced both the long and short length-scale structural properties of ssDNA and RNAs in a rigorous comparison against recent data from fluorescence resonance energy transfer, small-angle X-ray scattering, force spectroscopy and fluorescence correlation transport measurements on sequences up to ∼100 nucleotides long. With this new MC algorithm, we conducted a comprehensive investigation of the entropy landscape of small RNA stem-loop structures. From a simulated ensemble of ∼10^6 equilibrium conformations, the entropy for the initiation of different size RNA hairpin loops was computed and compared against thermodynamic measurements. Starting from seeded hairpin loops, constrained MC simulations were then used to estimate the entropic costs associated with propagation of the stem. The numerical results provide new direct molecular insights into thermodynamic measurements from macroscopic calorimetry and melting experiments.

  2. STS 51-L crewmembers during training session in flight deck simulation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Shuttle mission simulator (SMS) scene of Astronauts Michael J. Smith, Ellison S. Onizuka, Judith A. Resnik, and Francis R. (Dick) Scobee in their launch and entry positions on the flight deck (46207); Left to right, Backup payload specialist Barbara R. Morgan, Teacher in Space Payload specialist Christa McAuliffe, Hughes Payload specialist Gregory B. Jarvis, and Mission Specialist Ronald E. McNair in the middeck portion of the Shuttle Mission Simulator at JSC (46208).

  3. A Detailed FLUKA-2005 Monte Carlo Simulation for the ATIC Detector

    NASA Technical Reports Server (NTRS)

    Gunasingha, R. M.; Fazely, A. R.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T. G.

    2006-01-01

    We have performed a detailed Monte Carlo (MC) calculation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2005, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active Bismuth Germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification, and as a particle tracking system, three projective layers of x-y scintillator hodoscopes were employed, above, in the middle and below a 0.75 nuclear interaction length graphite target. Our calculations are part of an analysis package of both the A- and energy-dependences of different nuclei interacting with the ATIC detector. The MC simulates the responses of different components of the detector, such as the Si matrix, the scintillator hodoscopes and the BGO calorimeter, to various nuclei. We also show comparisons of the FLUKA-2005 MC calculations with a GEANT calculation and data for protons, He and CNO.

  4. SU-C-204-06: Monte Carlo Dose Calculation for Kilovoltage X-Ray-Psoralen Activated Cancer Therapy (X-PACT): Preliminary Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mein, S; Gunasingha, R; Nolan, M

    Purpose: X-PACT is an experimental cancer therapy where kV x-rays are used to photo-activate anti-cancer therapeutics through phosphor intermediaries (phosphors that absorb x-rays and re-radiate as UV light). Clinical trials in pet dogs are currently underway (NC State College of Veterinary Medicine), and an essential component is the ability to model the kV dose in these dogs. Here we report the commissioning and characterization of a Monte Carlo (MC) treatment planning simulation tool to calculate X-PACT radiation doses in canine trials. Methods: The FLUKA multi-particle MC simulation package was used to simulate a standard X-PACT radiation treatment beam of 80 kVp with the Varian OBI x-ray source geometry. The beam quality was verified by comparing measured and simulated attenuation of the beam by various thicknesses of aluminum (2–4.6 mm) under narrow beam conditions (HVL). The beam parameters at commissioning were then corroborated using MC, characterized and verified with empirically collected commissioning data, including percent depth dose curves (PDD), back-scatter factors (BSF), collimator scatter factors, and the heel effect. All simulations were conducted for N=30M histories at M=100 iterations. Results: HVL and PDD simulation data agreed with an average percent error of 2.42%±0.33 and 6.03%±1.58, respectively. The mean square error (MSE) values for HVL and PDD (0.07% and 0.50%) were low, as expected; however, longer simulations are required to validate convergence to the expected values. Qualitatively, pre- and post-filtration source spectra matched well with 80 kVp references generated via SPEKTR software. Further validation of the commissioning data simulation is underway in preparation for first-time 3D dose calculations with canine CBCT data. Conclusion: We have prepared a Monte Carlo simulation capable of accurate dose calculation for use with ongoing X-PACT canine clinical trials. Preliminary results show good agreement with measured data and hold promise for accurate quantification of dose for this novel psoralen X-ray therapy. Funding Support, Disclosures, & Conflict of Interest: The Monte Carlo simulation work was not funded; Drs. Adamson & Oldham have received funding from Immunolight LLC for X-PACT research.

  5. Coarse kMC-based replica exchange algorithms for the accelerated simulation of protein folding in explicit solvent.

    PubMed

    Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V

    2016-05-14

    In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce 2 different methods which primarily differ in the exchange scheme between the parallel ensembles. We apply this approach to the folding of 2 different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in the signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the 2 techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.

  6. Thermodynamics and simulation of hard-sphere fluid and solid: Kinetic Monte Carlo method versus standard Metropolis scheme

    NASA Astrophysics Data System (ADS)

    Ustinov, E. A.

    2017-01-01

    The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case the chemical potential is determined with higher accuracy under the same conditions and number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
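
    For reference, the standard Metropolis scheme that serves as the baseline here reduces, for hard spheres, to accepting any single-particle displacement that creates no overlap (the Boltzmann factor is 0 or 1 for hard cores). The Python sketch below implements that baseline in a periodic cubic box; the particle number, box size, and maximum step are illustrative and unrelated to the paper's elongated-box setup.

      import numpy as np

      rng = np.random.default_rng(3)
      N, L, sigma, dmax = 64, 6.0, 1.0, 0.1   # particles, box edge, diameter, step

      # Start from a simple cubic lattice to avoid initial overlaps.
      g = int(np.ceil(N ** (1 / 3)))
      pos = np.array([(i, j, k) for i in range(g) for j in range(g)
                      for k in range(g)], float)[:N] * (L / g)

      def overlaps(i, trial):
          """True if particle i placed at `trial` overlaps any other particle."""
          d = pos - trial
          d -= L * np.round(d / L)            # minimum-image convention
          r2 = np.einsum('ij,ij->i', d, d)
          r2[i] = np.inf                      # ignore self-distance
          return np.any(r2 < sigma ** 2)

      accepted = 0
      for step in range(20_000):
          i = rng.integers(N)
          trial = (pos[i] + rng.uniform(-dmax, dmax, 3)) % L
          if not overlaps(i, trial):          # hard-core acceptance rule
              pos[i] = trial
              accepted += 1
      print(f"acceptance ratio: {accepted / 20_000:.2f}")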

  7. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies are likely to allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
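
    The core idea, stripped of the evolutionary and parallel machinery, is that move-type frequencies are periodically re-weighted by a running efficiency measure. The Python sketch below uses plain acceptance rate as that measure, which is far cruder than the sampling criteria proposed in the paper, and the "true" acceptance probabilities are invented.

      import numpy as np

      rng = np.random.default_rng(11)
      moves = ["translate", "rotate", "reptate"]
      true_acc = {"translate": 0.45, "rotate": 0.30, "reptate": 0.05}  # toy values

      weights = np.ones(len(moves))       # start with uniform move frequencies
      attempts = np.zeros(len(moves))
      accepts = np.zeros(len(moves))

      for sweep in range(50):
          probs = weights / weights.sum()
          for _ in range(200):
              k = rng.choice(len(moves), p=probs)
              attempts[k] += 1
              if rng.random() < true_acc[moves[k]]:   # stand-in for a real MC move
                  accepts[k] += 1
          # Re-allocate frequencies from observed efficiency, with a floor
          # so that no move type is ever starved completely.
          weights = np.maximum(accepts / np.maximum(attempts, 1), 0.02)

      print({m: f"{p:.2f}" for m, p in zip(moves, weights / weights.sum())})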

  8. An analytical derivation of MC-SCF vibrational wave functions for the quantum dynamical simulation of multiple proton transfer reactions: Initial application to protonated water chains

    NASA Astrophysics Data System (ADS)

    Drukker, Karen; Hammes-Schiffer, Sharon

    1997-07-01

    This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.

  9. Characterizing Vegetation Model Skill and Uncertainty in Simulated Ecosystem Response to Climate Change in the United States

    NASA Astrophysics Data System (ADS)

    Drapek, R. J.; Kim, J. B.

    2013-12-01

    We simulated ecosystem response to climate change in the USA and Canada at a 5 arc-minute grid resolution using the MC1 dynamic global vegetation model and nine CMIP3 future climate projections as input. The climate projections were produced by 3 GCMs simulating 3 SRES emissions scenarios. We examined MC1 outputs for the conterminous USA by summarizing them by EPA level II and III ecoregions to characterize model skill and evaluate the magnitude and uncertainties of simulated ecosystem response to climate change. First, we evaluated model skill by comparing outputs from the recent historical period with benchmark datasets. Distribution of potential natural vegetation simulated by MC1 was compared with Kuchler's map. Above ground live carbon simulated by MC1 was compared with the National Biomass and Carbon Dataset. Fire return intervals calculated by MC1 were compared with maximum and minimum values compiled for the United States. Each EPA Level III Ecoregion was scored for average agreement with corresponding benchmark data and an average score was calculated for all three types of output. Greatest agreement with benchmark data happened in the Western Cordillera, the Ozark / Ouachita-Appalachian Forests, and the Southeastern USA Plains (EPA Level II Ecoregions). The lowest agreement happened in the Everglades and the Tamaulipas-Texas Semiarid Plain. For simulated ecosystem response to future climate projections we examined MC1 output for shifts in vegetation type, vegetation carbon, runoff, and biomass consumed by fire. Each ecoregion was scored for the amount of change from historical conditions for each variable and an average score was calculated. Smallest changes were forecast for Western Cordillera and Marine West Coast Forest ecosystems. Largest changes were forecast for the Cold Deserts, the Mixed Wood Plains, and the Central USA Plains. By combining scores of model skill for the historical period for each EPA Level 3 Ecoregion with scores representing the magnitude of ecosystem changes in the future, we identified high and low uncertainty ecoregions. The largest anticipated changes and the lowest measures of model skill coincide in the Central USA Plains and the Mixed Wood Plains. The combination of low model skill and high degree of ecosystem change elevate the importance of our uncertainty in this ecoregion. The highest projected changes coincide with relatively high model skill in the Cold Deserts. Climate adaptation efforts are the most likely to pay off in these regions. Finally, highest model skill and lowest anticipated changes coincide in the Western Cordillera and the Marine West Coast Forests. These regions may be relatively low-risk for climate change impacts when compared to the other ecoregions. These results represent only the first step in this type of analysis; there exist many ways to strengthen it. One, MC1 calibrations can be optimized using a structured optimization technique. Two, a larger set of climate projections can be used to capture a fuller range of GCMs and emissions scenarios. And three, employing an ensemble of vegetation models would make the analysis more robust.

  10. Physics and Computational Methods for X-ray Scatter Estimation and Correction in Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Bootsma, Gregory J.

    X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables the reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1-2 minutes instead of hours or days. Resulting scatter corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
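
    The fitting step at the heart of this method can be illustrated compactly. The Python sketch below represents a noisy (1D, for brevity) MC scatter estimate as a finite sum of low-frequency sine and cosine terms solved by linear least squares; the true profile, noise level, and number of harmonics are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.linspace(0, 1, 256)                    # detector coordinate
      truth = 1.0 + 0.5 * np.cos(2 * np.pi * x) + 0.2 * np.sin(4 * np.pi * x)
      noisy = truth + rng.normal(0, 0.15, x.size)   # few-photon MC estimate

      K = 4                                         # harmonics retained
      cols = [np.ones_like(x)]
      for k in range(1, K + 1):
          cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
      A = np.column_stack(cols)

      coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
      smooth = A @ coef
      print(f"RMS error, raw MC estimate: {np.sqrt(np.mean((noisy - truth) ** 2)):.4f}")
      print(f"RMS error, Fourier fit:     {np.sqrt(np.mean((smooth - truth) ** 2)):.4f}")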

  11. SU-E-J-205: Monte Carlo Modeling of Ultrasound Probes for Real-Time Ultrasound Image-Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hristov, D; Schlosser, J; Bazalova, M

    2014-06-01

    Purpose: To quantify the effect of ultrasound (US) probe beam attenuation for radiation therapy delivered under real-time US image guidance by means of Monte Carlo (MC) simulations. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their CT images in the EGSnrc BEAMnrc and DOSXYZnrc codes. Due to the metal parts, the probes were scanned in a Tomotherapy machine with a 3.5 MV beam. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2–8.0 g/cm³. Beam attenuation due to the probes was measured in a solid water phantom for a 6 MV and 15 MV 15×15 cm² beam delivered on a Varian Trilogy linear accelerator. The dose was measured with the PTW-729 ionization chamber array at two depths and compared to MC simulations. The extreme-case beam attenuation expected in robotic US image guided radiotherapy for probes in upright position was quantified by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R² > 0.99. The maximum mass densities were 4.6 and 4.2 g/cm³ in the C5-2 and X6-1 probe, respectively. Gamma analysis of the simulated and measured doses revealed that over 98% of measurement points passed the 3%/3mm criteria for both probes and measurement depths. The extreme attenuation for probes in upright position was found to be 25% and 31% for the C5-2 and X6-1 probe, respectively, for both 6 and 15 MV beams at 10 cm depth. Conclusion: MC models of two US probes used for real-time image guidance during radiotherapy have been built. As a result, radiotherapy treatment planning with the imaging probes in place can now be performed. J Schlosser is an employee of SoniTrack Systems, Inc. D Hristov has financial interest in SoniTrack Systems, Inc.

  12. Photocarrier Radiometry for Non-contact Evaluation of Monocrystalline Silicon Solar Cell Under Low-Energy (< 200 keV) Proton Irradiation

    NASA Astrophysics Data System (ADS)

    Oliullah, Md.; Liu, J. Y.; Song, P.; Wang, Y.

    2018-06-01

    A three-layer theoretical model is developed for the characterization of the electronic transport properties (lifetime τ, diffusion coefficient D, and surface recombination velocity s) of solar cells under energetic particle irradiation using non-contact photocarrier radiometry. Monte Carlo (MC) simulation is carried out to obtain the depth profiles of the proton irradiation layer at different low energies (< 200 keV). Monocrystalline silicon (c-Si) solar cells are investigated under different low-energy proton irradiations, and the carrier transport parameters of the three layers are obtained by best-fitting the experimental results. The results show that the low-energy protons have little influence on the transport parameters of the non-irradiated layer, but a strong influence on those of both the p- and n-region irradiated layers, consistent with the MC simulations.

  13. Toward GPGPU accelerated human electromechanical cardiac simulations

    PubMed Central

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

    In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart—a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164 × compared with SC and 5.5 times compared with MC for the solution of the ODE model. Speedup of up to 72 × compared with SC and 2.6 × compared with MC was also observed for the PDE solve. Using the same human geometry, the GPU implementation of mechanics residual/Jacobian computation provided speedups of up to 44 × compared with SC and 2.0 × compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492

  14. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) of the rare earth elements (REEs) in beneficiation rare earth waste from the gold processing: case study

    NASA Astrophysics Data System (ADS)

    Bieda, Bogusław; Grzesik, Katarzyna

    2017-11-01

    The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth element (REE) recovery from secondary materials, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. The use of a stochastic approach helps to characterize uncertainties better than a deterministic method. Uncertainty of data can be expressed through a definition of the probability distribution of that data (e.g. through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on literature, (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g., particulates, uranium-238, thorium-232), energy and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best solution for quantifying the uncertainty in the LCI/LCA. The reliability of LCA results may be uncertain to a certain degree, but this uncertainty can be characterized with the help of the MC method.
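
    The propagation scheme itself is simple to sketch. In the Python example below, invented lognormal distributions stand in for the study's inventory data, MC sampling propagates them to a toy aggregate indicator, and a rank correlation gives a crude sensitivity measure; none of the flows or coefficients are from the study.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2017)
      n = 10_000

      # Assumed per-tonne inventory flows (geometric mean, geometric sd).
      electricity = rng.lognormal(np.log(120.0), np.log(1.2), n)   # kWh/t
      particulates = rng.lognormal(np.log(0.8), np.log(1.5), n)    # kg/t
      thorium_232 = rng.lognormal(np.log(2e-4), np.log(2.0), n)    # kg/t

      # Toy aggregate indicator combining the sampled flows.
      impact = 0.5 * electricity + 40.0 * particulates + 1e4 * thorium_232

      p2_5, p97_5 = np.percentile(impact, [2.5, 97.5])
      print(f"impact: {impact.mean():.1f} +/- {impact.std(ddof=1):.1f} "
            f"(95% interval {p2_5:.1f}-{p97_5:.1f})")

      # Crude sensitivity: rank correlation of each input with the result.
      for name, v in [("electricity", electricity),
                      ("particulates", particulates),
                      ("thorium-232", thorium_232)]:
          rho, _ = spearmanr(v, impact)
          print(f"  rho({name}) = {rho:+.2f}")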

  15. The High performance of nanocrystalline CVD diamond coated hip joints in wear simulator test.

    PubMed

    Maru, M M; Amaral, M; Rodrigues, S P; Santos, R; Gouvea, C P; Archanjo, B S; Trommer, R M; Oliveira, F J; Silva, R F; Achete, C A

    2015-09-01

    The superior biotribological performance of nanocrystalline diamond (NCD) coatings grown by a chemical vapor deposition (CVD) method, in the form of high wear resistance, has already been shown in ball-on-plate experiments under physiological liquid lubrication. However, tests with a close-to-real approach were missing, and these constitute the aim of the present work. Hip joint wear simulator tests were performed with cups and heads made of silicon nitride coated with NCD of ~10 μm in thickness. Five million testing cycles (Mc) were run, which represent nearly five years of hip joint implant activity in a patient. For the wear analysis, gravimetry, profilometry, scanning electron microscopy and Raman spectroscopy techniques were used. After 0.5 Mc of wear testing, truncation of the protruded regions of the NCD film occurred as a result of a fine-scale abrasive wear mechanism, evolving to extensive plateau regions and a highly polished surface condition (Ra < 10 nm). Such surface modification took place without any catastrophic features such as cracking, grain pullouts or delamination of the coatings. A steady-state volumetric wear rate of 0.02 mm³/Mc, equivalent to a linear wear of 0.27 μm/Mc, compares favorably with the best performance reported in the literature for the fourth-generation alumina ceramic (0.05 mm³/Mc). Also, squeaking, a quite common phenomenon in hard-on-hard systems, was absent in the present all-NCD system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
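
    The intermittent scatter modelling idea can be sketched as an MLEM loop whose additive scatter term is refreshed only every few iterations. Everything below (system matrix, scatter model) is a toy stand-in, not the authors' implementation.

        import numpy as np

        # Toy MLEM loop whose additive scatter estimate is refreshed only every
        # few iterations ("intermittent scatter modelling"). The system matrix A
        # and mc_scatter_estimate() stand in for the real projector and the MC
        # scatter simulator.
        rng = np.random.default_rng(0)
        n_pix, n_bins = 64, 96
        A = rng.random((n_bins, n_pix)) / n_pix
        truth = rng.random(n_pix)

        def mc_scatter_estimate(image):
            # placeholder: a smoothed fraction of the primary projection
            return 0.2 * np.convolve(A @ image, np.ones(7) / 7, mode="same")

        measured = A @ truth + mc_scatter_estimate(truth)

        x = np.ones(n_pix)
        scatter = np.zeros(n_bins)
        sensitivity = A.sum(axis=0)
        for it in range(30):
            if it % 5 == 0:                     # intermittent scatter refresh
                scatter = mc_scatter_estimate(x)
            expected = A @ x + scatter
            x *= (A.T @ (measured / np.maximum(expected, 1e-12))) / sensitivity
        print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))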

  17. Electrons to Reactors Multiscale Modeling: Catalytic CO Oxidation over RuO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, Jonathan E.; Lorenzi, Juan M.; Krogel, Jaron T.

    First-principles kinetic Monte Carlo (1p-kMC) simulations for CO oxidation on two RuO2 facets, RuO2(110) and RuO2(111), were coupled to the computational fluid dynamics (CFD) simulation package MFIX, and reactor-scale simulations were then performed. 1p-kMC coupled with CFD has recently been shown to be a feasible method for translating molecular-scale mechanistic knowledge to the reactor scale, enabling comparisons to in situ and online experimental measurements. Only a few studies with such coupling have been published. This work incorporates multiple catalytic surface facets into the scale-coupled simulation, and three possibilities were investigated: the two possibilities of each facet individually being the dominant phase in the reactor, and also the possibility that both facets were present on the catalyst particles in the ratio predicted by an ab initio thermodynamics-based Wulff construction. When lateral interactions between adsorbates were included in the 1p-kMC simulations, the two surfaces, RuO2(110) and RuO2(111), were found to be of similar order of magnitude in activity for the pressure range of 1 × 10⁻⁴ bar to 1 bar, with the RuO2(110) surface termination showing more simulated activity than the RuO2(111) surface termination. Coupling between the 1p-kMC and CFD was achieved with a lookup table generated by the error-based modified Shepard interpolation scheme. Isothermal reactor-scale simulations were performed and compared to two separate experimental studies, conducted with reactant partial pressures of ≤0.1 bar. Simulations without an isothermality restriction were also conducted and showed that the simulated temperature gradient across the catalytic reactor bed is <0.5 K, which validated the use of the isothermality restriction for investigating the reactor-scale phenomenological temperature dependences. The Wulff-construction-based reactor simulations reproduced a trend similar to one experimental data set relatively well, with the (110) surface being more active at higher temperatures; in contrast, for the other experimental data set, our reactor simulations achieve surprisingly and perhaps fortuitously good agreement with the activity and phenomenological pressure dependence when it is assumed that the (111) facet is the only active facet present. Lastly, the active phase of catalytic CO oxidation over RuO2 remains unsettled, but the present study presents proof of principle (and progress) toward more accurate multiscale modeling from electrons to reactors and new simulation results.

  18. Electrons to Reactors Multiscale Modeling: Catalytic CO Oxidation over RuO2

    DOE PAGES

    Sutton, Jonathan E.; Lorenzi, Juan M.; Krogel, Jaron T.; ...

    2018-04-20

    First-principles kinetic Monte Carlo (1p-kMC) simulations for CO oxidation on two RuO2 facets, RuO2(110) and RuO2(111), were coupled to the computational fluid dynamics (CFD) simulation package MFIX, and reactor-scale simulations were then performed. 1p-kMC coupled with CFD has recently been shown to be a feasible method for translating molecular-scale mechanistic knowledge to the reactor scale, enabling comparisons to in situ and online experimental measurements. Only a few studies with such coupling have been published. This work incorporates multiple catalytic surface facets into the scale-coupled simulation, and three possibilities were investigated: the two possibilities of each facet individually being the dominant phase in the reactor, and also the possibility that both facets were present on the catalyst particles in the ratio predicted by an ab initio thermodynamics-based Wulff construction. When lateral interactions between adsorbates were included in the 1p-kMC simulations, the two surfaces, RuO2(110) and RuO2(111), were found to be of similar order of magnitude in activity for the pressure range of 1 × 10⁻⁴ bar to 1 bar, with the RuO2(110) surface termination showing more simulated activity than the RuO2(111) surface termination. Coupling between the 1p-kMC and CFD was achieved with a lookup table generated by the error-based modified Shepard interpolation scheme. Isothermal reactor-scale simulations were performed and compared to two separate experimental studies, conducted with reactant partial pressures of ≤0.1 bar. Simulations without an isothermality restriction were also conducted and showed that the simulated temperature gradient across the catalytic reactor bed is <0.5 K, which validated the use of the isothermality restriction for investigating the reactor-scale phenomenological temperature dependences. The Wulff-construction-based reactor simulations reproduced a trend similar to one experimental data set relatively well, with the (110) surface being more active at higher temperatures; in contrast, for the other experimental data set, our reactor simulations achieve surprisingly and perhaps fortuitously good agreement with the activity and phenomenological pressure dependence when it is assumed that the (111) facet is the only active facet present. Lastly, the active phase of catalytic CO oxidation over RuO2 remains unsettled, but the present study presents proof of principle (and progress) toward more accurate multiscale modeling from electrons to reactors and new simulation results.
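
    The lookup-table coupling can be illustrated with a basic inverse-distance (Shepard) interpolant; note the paper uses an error-based modified Shepard scheme with local gradient information, which this sketch omits. Table entries are hypothetical.

        import numpy as np

        # Lookup table mapping local gas conditions (T, p_CO, p_O2) to a kMC
        # turnover frequency, interpolated by inverse-distance weighting.
        # In practice the coordinates should be scaled to comparable ranges.
        table_x = np.array([[500., 0.01, 0.01],
                            [550., 0.01, 0.05],
                            [600., 0.05, 0.05],
                            [650., 0.10, 0.10]])    # hypothetical kMC states
        table_f = np.array([0.8, 2.5, 9.1, 30.0])   # hypothetical TOFs (1/s/site)

        def shepard(x, xs=table_x, fs=table_f, p=2.0):
            d = np.linalg.norm(xs - x, axis=1)
            if np.any(d < 1e-12):                   # exact table hit
                return fs[np.argmin(d)]
            w = d**-p
            return float(np.sum(w * fs) / np.sum(w))

        print(shepard(np.array([575., 0.03, 0.05])))  # TOF at an off-grid state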

  19. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

    The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task, the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10 × 10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC-calculated results with the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
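
    For reference, the TPR20,10 beam quality index can be estimated from depth-dose data with the widely used empirical relation TPR20,10 = 1.2661·PDD20,10 − 0.0595 (IAEA TRS-398); the sketch below applies it to hypothetical 6 MV readings.

        # Beam quality index from depth-dose measurements: TPR20,10 can be
        # measured directly (doses at 20 and 10 cm depth, fixed SDD) or
        # estimated from the percentage depth-dose ratio PDD20,10.
        def tpr2010_from_pdd(pdd20, pdd10):
            pdd_ratio = pdd20 / pdd10            # PDD20,10
            return 1.2661 * pdd_ratio - 0.0595   # IAEA TRS-398 empirical fit

        # Illustrative values for a 6 MV beam (hypothetical numbers):
        print(f"TPR20,10 = {tpr2010_from_pdd(38.5, 66.8):.3f}")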

  20. Analysis of light incident location and detector position in early diagnosis of knee osteoarthritis by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling

    2017-01-01

    Early detection of knee osteoarthritis (KOA) is meaningful to delay or prevent the onset of osteoarthritis. Given the structural complexity of the knee joint, the locations of light incidence and detection are extremely important in optical inspection. In this paper, the propagation of 780-nm near-infrared photons in a three-dimensional knee joint model is simulated by the Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of incident and detecting location on the number of detected signal photons and the signal-to-noise ratio (SNR). Firstly, a three-dimensional photon propagation model of the knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in the three-dimensional knee joint model. Photons which finally migrate out of the knee joint surface are numerically analyzed. By analyzing the number of signal photons and the SNR for the six given incident locations, the optimal incident and detecting location is defined. Finally, a series of phantom experiments is conducted to verify the simulation results. According to the simulation and phantom experiment results, the best incident location is near the right side of the meniscus at the rear end of the left knee joint, and the detector should correspondingly be placed near the patella.
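
    A minimal sketch of the underlying photon-transport MC, assuming a homogeneous medium with illustrative optical properties (not knee tissue values): exponential free paths, absorption weighting and Henyey-Greenstein scattering.

        import numpy as np

        rng = np.random.default_rng(1)
        mu_a, mu_s, g = 0.5, 80.0, 0.9      # 1/cm, 1/cm, anisotropy (assumed)
        mu_t = mu_a + mu_s

        def hg_cos(g):
            # sample cos(theta) from the Henyey-Greenstein phase function
            if g == 0:
                return 2 * rng.random() - 1
            s = (1 - g * g) / (1 - g + 2 * g * rng.random())
            return (1 + g * g - s * s) / (2 * g)

        absorbed, n_packets = 0.0, 1_000
        for _ in range(n_packets):
            pos, d, w = np.zeros(3), np.array([0., 0., 1.]), 1.0
            while w > 1e-2:
                pos += d * (-np.log(rng.random()) / mu_t)   # free path, cm
                w_a = w * mu_a / mu_t                       # absorbed fraction
                absorbed += w_a
                w -= w_a
                ct = hg_cos(g)                              # new direction
                st, phi = np.sqrt(1 - ct * ct), 2 * np.pi * rng.random()
                a = (np.array([1., 0., 0.]) if abs(d[2]) > 0.99
                     else np.array([0., 0., 1.]))
                u = np.cross(d, a); u /= np.linalg.norm(u)
                v = np.cross(d, u)
                d = ct * d + st * (np.cos(phi) * u + np.sin(phi) * v)
        print("mean absorbed weight per packet:", absorbed / n_packets)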

  1. Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4

    2013-11-15

    Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single-angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm⁻¹ and 7/(2π) rad⁻¹ for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). These data indicate that spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10⁶ photons resulted in an error reduction of greater than 85% for estimating scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited-photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low-frequency domain. The frequency content is modulated both by object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to be used to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
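
    The Butterworth denoising step can be sketched with scipy on a toy noisy scatter profile; the detector pitch and profile shape are assumptions, while the 0.1 cm⁻¹ cutoff follows the SFW finding quoted above.

        import numpy as np
        from scipy.signal import butter, filtfilt

        pitch = 0.1                                 # cm per detector pixel (assumed)
        x = np.arange(0, 40, pitch)                 # 40 cm wide projection
        scatter_true = 1e3 * np.exp(-((x - 20) / 12) ** 2)   # smooth toy profile
        noisy = scatter_true + np.random.default_rng(3).normal(0, 60, x.size)

        fs = 1.0 / pitch                            # sampling frequency, cycles/cm
        cutoff = 0.1                                # cycles/cm (SFW-like bound)
        b, a = butter(N=4, Wn=cutoff / (fs / 2))    # normalized to Nyquist
        denoised = filtfilt(b, a, noisy)            # zero-phase filtering

        print("rms error before:", np.sqrt(np.mean((noisy - scatter_true) ** 2)))
        print("rms error after: ", np.sqrt(np.mean((denoised - scatter_true) ** 2)))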

  2. Contrast of Backscattered Electron SEM Images of Nanoparticles on Substrates with Complex Structure

    PubMed Central

    Müller, Erich; Fritsch-Decker, Susanne; Hettler, Simon; Störmer, Heike; Weiss, Carsten; Gerthsen, Dagmar

    2017-01-01

    This study is concerned with backscattered electron scanning electron microscopy (BSE SEM) contrast of complex nanoscaled samples which consist of SiO2 nanoparticles (NPs) deposited on indium-tin-oxide-covered bulk SiO2 and glassy carbon substrates. BSE SEM contrast of NPs is studied as a function of the primary electron energy and working distance. Contrast inversions are observed which prevent intuitive interpretation of NP contrast in terms of material contrast. Experimental data are quantitatively compared with Monte Carlo (MC) simulations. Quantitative agreement between experimental data and MC simulations is obtained if the transmission characteristics of the annular semiconductor detector are taken into account. MC simulations facilitate the understanding of NP contrast inversions and are helpful to derive conditions for optimum material and topography contrast. PMID:29109816

  3. Contrast of Backscattered Electron SEM Images of Nanoparticles on Substrates with Complex Structure.

    PubMed

    Kowoll, Thomas; Müller, Erich; Fritsch-Decker, Susanne; Hettler, Simon; Störmer, Heike; Weiss, Carsten; Gerthsen, Dagmar

    2017-01-01

    This study is concerned with backscattered electron scanning electron microscopy (BSE SEM) contrast of complex nanoscaled samples which consist of SiO2 nanoparticles (NPs) deposited on indium-tin-oxide-covered bulk SiO2 and glassy carbon substrates. BSE SEM contrast of NPs is studied as a function of the primary electron energy and working distance. Contrast inversions are observed which prevent intuitive interpretation of NP contrast in terms of material contrast. Experimental data are quantitatively compared with Monte Carlo (MC) simulations. Quantitative agreement between experimental data and MC simulations is obtained if the transmission characteristics of the annular semiconductor detector are taken into account. MC simulations facilitate the understanding of NP contrast inversions and are helpful to derive conditions for optimum material and topography contrast.

  4. Microphysics of Clouds with the Relaxed Arakawa-Schubert Scheme (McRAS). Part I: Design and Evaluation with GATE Phase III Data.

    NASA Astrophysics Data System (ADS)

    Sud, Y. C.; Walker, G. K.

    1999-09-01

    A prognostic cloud scheme named McRAS (Microphysics of Clouds with Relaxed Arakawa-Schubert Scheme) has been designed and developed with the aim of improving moist processes, microphysics of clouds, and cloud-radiation interactions in GCMs. McRAS distinguishes three types of clouds: convective, stratiform, and boundary layer. The convective clouds transform and merge into stratiform clouds on an hourly timescale, while the boundary layer clouds merge into the stratiform clouds instantly. The cloud condensate converts into precipitation following the autoconversion equations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically, with fully interactive cloud microphysics throughout the life cycle of the cloud, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single-column model (SCM) with the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE) Phase III data has shown that, together with the rest of the model physics, McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. The time history and time-mean in-cloud water and ice distributions, fractional cloudiness, cloud optical thickness, origin of precipitation in the convective anvils and towers, and the convective updraft and downdraft velocities and mass fluxes all show realistic behavior. Some of these diagnostics are not verifiable with data on hand. These SCM sensitivity tests show that (i) without clouds the simulated GATE-SCM atmosphere is cooler than observed; (ii) the model's convective scheme, RAS, is an important subparameterization of McRAS; and (iii) advection of cloud water substance is helpful in simulating better cloud distribution and cloud-radiation interaction. An evaluation of the performance of McRAS in the Goddard Earth Observing System II GCM is given in a companion paper (Part II).
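
    The Sundqvist-type autoconversion mentioned above has the general form P = c0·q_c·[1 − exp(−(q_c/q_crit)²)]; the sketch below evaluates it with illustrative coefficients (McRAS adds further modifications such as the Bergeron-Findeisen enhancement).

        import numpy as np

        c0 = 1.0e-4        # 1/s, conversion rate coefficient (assumed)
        q_crit = 5.0e-4    # kg/kg, critical condensate mixing ratio (assumed)

        def autoconversion(q_c):
            # switches on smoothly once condensate exceeds the critical amount
            return c0 * q_c * (1.0 - np.exp(-(q_c / q_crit) ** 2))

        for q in (1e-4, 5e-4, 2e-3):   # kg/kg
            print(f"q_c = {q:.0e}  ->  P = {autoconversion(q):.3e} kg/kg/s")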

  5. Micrometric precision of prosthetic dental crowns obtained by optical scanning and computer-aided designing/computer-aided manufacturing system

    NASA Astrophysics Data System (ADS)

    das Neves, Flávio Domingues; de Almeida Prado Naves Carneiro, Thiago; do Prado, Célio Jesus; Prudente, Marcel Santana; Zancopé, Karla; Davi, Letícia Resende; Mendonça, Gustavo; Soares, Carlos José

    2014-08-01

    The current study evaluated the marginal fit of prosthetic dental crowns obtained by optical scanning and a computer-aided designing/computer-aided manufacturing system, using micro-computed tomography. The virtual models were obtained with four different scanning surfaces: typodont (T), regular impressions (RI), master casts (MC), and powdered master casts (PMC). Five virtual models were obtained for each group. For each model, a crown was designed in the software and milled from feldspathic ceramic blocks. Micro-CT images were obtained for marginal gap measurements, and the data were statistically analyzed by one-way analysis of variance followed by Tukey's test. The mean vertical misfit was T=62.6±65.2 μm; MC=60.4±38.4 μm; PMC=58.1±38.0 μm, and RI=89.8±62.8 μm. Considering a vertical marginal gap of up to 75 μm, the percentages were T=71.5%, RI=49.2%, MC=69.6%, and PMC=71.2%. The percentages of horizontal overextension were T=8.5%, RI=0%, MC=0.8%, and PMC=3.8%. Based on the results, virtual model acquisition by scanning the typodont (simulated mouth) or MC, with or without powder, showed acceptable values for the marginal gap. The larger marginal gap of the RI group suggests that it is preferable to scan directly from the mouth or from the MC.
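
    The statistical pipeline (one-way ANOVA followed by Tukey's test) can be reproduced in a few lines; the gap samples below are synthetic stand-ins generated around the reported group means.

        import numpy as np
        from scipy.stats import f_oneway
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(7)
        groups = {
            "T":   rng.normal(62.6, 20, 30),   # marginal gaps in um (synthetic)
            "RI":  rng.normal(89.8, 20, 30),
            "MC":  rng.normal(60.4, 20, 30),
            "PMC": rng.normal(58.1, 20, 30),
        }

        print(f_oneway(*groups.values()))        # global test for any difference

        values = np.concatenate(list(groups.values()))
        labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
        print(pairwise_tukeyhsd(values, labels))  # which pairs differ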

  6. Supernova Driving. II. Compressive Ratio in Molecular-cloud Turbulence

    NASA Astrophysics Data System (ADS)

    Pan, Liubin; Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke

    2016-07-01

    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
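
    A compressive ratio of the kind analyzed here can be computed from a periodic velocity field by Fourier-space Helmholtz decomposition; for a random isotropic field the sketch should return a value near the equilibrium 0.5 quoted above.

        import numpy as np

        # Project each Fourier mode onto k (compressive part) and compare the
        # power in the compressive and solenoidal components.
        def compressive_ratio(vx, vy, vz):
            n = vx.shape[0]
            k = np.fft.fftfreq(n)
            KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
            K2 = KX**2 + KY**2 + KZ**2
            K2[0, 0, 0] = 1.0                        # avoid division by zero
            Vx, Vy, Vz = (np.fft.fftn(v) for v in (vx, vy, vz))
            longi = (Vx * KX + Vy * KY + Vz * KZ) / np.sqrt(K2)
            p_comp = np.sum(np.abs(longi) ** 2)
            p_tot = np.sum(np.abs(Vx) ** 2 + np.abs(Vy) ** 2 + np.abs(Vz) ** 2)
            return p_comp / (p_tot - p_comp)         # compressive / solenoidal

        rng = np.random.default_rng(5)
        v = rng.normal(size=(3, 32, 32, 32))         # random field: ratio ~ 0.5
        print(compressive_ratio(*v))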

  7. Should adhesive debonding be simulated for intra-radicular post stress analyses?

    PubMed

    Caldas, Ricardo A; Bacchi, Atais; Barão, Valentim A R; Versluis, Antheunis

    2018-06-23

    To elucidate the influence of debonding on stress distribution and maximum stresses for intra-radicular restorations, five intra-radicular restorations were analyzed by finite element analysis (FEA): MP=metallic cast post core; GP=glass fiber post core; PP=pre-fabricated metallic post core; RE=resin endocrown; CE=single-piece ceramic endocrown. Two cervical preparations were considered: no ferrule (f0) and 2 mm ferrule (f1). The simulation was conducted in three steps: (1) intact bonds at all contacts; (2) bond failure between crown and tooth; (3) bond failure among tooth, post and crown interfaces. Contact friction and separation between interfaces were modeled where bond failure occurred. Mohr-Coulomb stress ratios (σMC ratio) and fatigue safety factors (SF) for the dentin structure were compared with published strength values, fatigue life, and fracture patterns of teeth with intra-radicular restorations. The σMC ratio showed no differences among models at the first step. The second step increased the σMC ratio at the ferrule compared to step 1. At the third step, the σMC ratio and SF for the f0 models were highly influenced by the post material. The CE and RE models had the highest σMC ratios and the lowest SF. MP had the lowest σMC ratio and the highest SF. The f1 models showed no relevant differences among them at the third step. FEA most closely predicted the failure performance of intra-radicular posts when frictional contact was modeled. Results of analyses where all interfaces are assumed to be perfectly bonded should be considered with caution. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
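
    One common form of the Mohr-Coulomb stress ratio for brittle materials is σ1/S_t − σ3/S_c, with failure predicted at 1; whether this exact form matches the paper's definition is an assumption, and the strength values below are placeholders rather than dentin data.

        # Mohr-Coulomb stress ratio from principal stresses and the tensile and
        # compressive strengths; failure is predicted when the ratio reaches 1.
        def mohr_coulomb_ratio(sigma_1, sigma_3, S_t=100.0, S_c=300.0):
            """sigma_1 >= sigma_3 in MPa; S_t, S_c positive strengths in MPa."""
            return sigma_1 / S_t - sigma_3 / S_c

        print(mohr_coulomb_ratio(60.0, -80.0))   # 0.6 + 0.267: near failure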

  8. Dosimetry study for a new in vivo X-ray fluorescence (XRF) bone lead measurement system

    NASA Astrophysics Data System (ADS)

    Nie, Huiling; Chettle, David; Luo, Liqiang; O'Meara, Joanne

    2007-10-01

    A new 109Cd γ-ray-induced bone lead measurement system has been developed to reduce the minimum detectable limit (MDL) of the system. The system consists of four 16 mm diameter detectors. It requires a stronger source compared to the "conventional" system. A dosimetry study has been performed to estimate the dose delivered by this system. The study was carried out using human-equivalent phantoms. Three sets of phantoms were made to estimate the dose delivered to three age groups: 5-year-olds, 10-year-olds and adults. Three approaches were applied to evaluate the dose: calculations, Monte Carlo (MC) simulations, and experiments. Experimental results and analytical calculations were used to validate the MC simulation. The experiments were performed by placing Panasonic UD-803AS TLDs at different places in the phantoms representing different organs. Because of the difficulty of obtaining the organ dose and the whole-body dose solely by experiments and traditional calculations, the equivalent dose and effective dose were calculated by MC simulations. The results showed that the doses delivered to organs other than the targeted lower leg are negligibly small. The total effective doses are 8.45/9.37 μSv (female/male) for 5-year-olds, 4.20 μSv for 10-year-olds, and 0.26 μSv for adults. Approval to conduct human measurements with this system has been received from the Research Ethics Board based on this research.
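
    Effective dose is the tissue-weighted sum E = Σ w_T·H_T; the sketch uses a subset of ICRP 103 weights with made-up organ equivalent doses to show the bookkeeping, not the paper's values.

        # Effective dose as the ICRP tissue-weighted sum of organ equivalent
        # doses. Weights are a subset of ICRP 103 values; the equivalent doses
        # are hypothetical numbers for illustration.
        w_T = {"gonads": 0.08, "red_marrow": 0.12, "skin": 0.01,
               "bone_surface": 0.01, "remainder": 0.12}
        H_T = {"gonads": 0.1, "red_marrow": 2.0, "skin": 15.0,
               "bone_surface": 20.0, "remainder": 0.3}   # uSv, hypothetical

        E = sum(w_T[t] * H_T[t] for t in w_T)
        print(f"effective dose E = {E:.2f} uSv")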

  9. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
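
    The benefit of averaging in the Random Noise coupling can be seen on a toy fixed-point iteration with a noisy "MC" source term; the scheme below is entirely schematic and is not the authors' model.

        import numpy as np

        rng = np.random.default_rng(11)
        x_true = 2.0

        def mc_source(x, sigma=0.2):
            # noisy contraction standing in for an MC-evaluated source term
            return 0.5 * (x + x_true) + rng.normal(0.0, sigma)

        x, avg, n = 0.0, 0.0, 0
        for it in range(1, 2001):
            x = mc_source(x)          # coupled FV/MC iteration
            n += 1
            avg += (x - avg) / n      # running average of the iterates

        print(f"last iterate: {x:.3f}   averaged: {avg:.3f}   exact: {x_true}")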

  10. Southwestern Pine Forests Likely to Disappear

    ScienceCinema

    McDowell, Nathan

    2018-01-16

    A new study, led by Los Alamos National Laboratory's Nathan McDowell, suggests that a major forest type, the pine-juniper woodlands of the Southwestern U.S., could be wiped out by the end of this century due to climate change, and that conifers throughout much of the Northern Hemisphere may be on a similar trajectory. New results, reported in the journal Nature Climate Change, suggest that global models may underestimate predictions of forest death. McDowell and his large international team strove to provide the missing pieces of understanding tree death at three levels: plant, regional and global. The team rigorously developed and evaluated multiple process-based and empirical models against experimental results, and then compared these models to results from global vegetation models to examine independent simulations. They discovered that the global models simulated mortality throughout the Northern Hemisphere that was of similar magnitude to, but much broader spatial scale than, what the evaluated ecosystem models predicted for the Southwest.

  11. Southwestern Pine Forests Likely to Disappear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDowell, Nathan

    A new study, led by Los Alamos National Laboratory's Nathan McDowell, suggests that a major forest type, the pine-juniper woodlands of the Southwestern U.S., could be wiped out by the end of this century due to climate change, and that conifers throughout much of the Northern Hemisphere may be on a similar trajectory. New results, reported in the journal Nature Climate Change, suggest that global models may underestimate predictions of forest death. McDowell and his large international team strove to provide the missing pieces of understanding tree death at three levels: plant, regional and global. The team rigorously developed and evaluated multiple process-based and empirical models against experimental results, and then compared these models to results from global vegetation models to examine independent simulations. They discovered that the global models simulated mortality throughout the Northern Hemisphere that was of similar magnitude to, but much broader spatial scale than, what the evaluated ecosystem models predicted for the Southwest.

  12. A fast and complete GEANT4 and ROOT Object-Oriented Toolkit: GROOT

    NASA Astrophysics Data System (ADS)

    Lattuada, D.; Balabanski, D. L.; Chesnevskaya, S.; Costa, M.; Crucillà, V.; Guardo, G. L.; La Cognata, M.; Matei, C.; Pizzone, R. G.; Romano, S.; Spitaleri, C.; Tumino, A.; Xu, Y.

    2018-01-01

    Present and future gamma-beam facilities represent a great opportunity to validate and evaluate the cross-sections of many photonuclear reactions at near-threshold energies. Monte Carlo (MC) simulations are very important to evaluate the reaction rates and to maximize the detection efficiency but, unfortunately, they can be very CPU-time-consuming and in some cases very hard to reproduce, especially when exploring near-threshold cross-sections. We developed a software tool that makes use of the validated tracking libraries of GEANT4 and the n-body event generator of ROOT in order to provide a fast, reliable and complete MC tool to be used for nuclear physics experiments. This tool is intended to be used for photonuclear reactions at γ-beam facilities with ELISSA (ELI Silicon Strip Array), a new detector array under development at the Extreme Light Infrastructure - Nuclear Physics (ELI-NP). We discuss the results of MC simulations performed to evaluate the effects of the electromagnetically induced background, of the straggling due to the target thickness, and of the resolution of the silicon detectors.
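
    The ROOT n-body generator referred to above is available as TGenPhaseSpace; a PyROOT sketch for an illustrative two-body photoreaction (γ + d → p + n, chosen for simplicity and not necessarily one studied with GROOT) could look as follows.

        from array import array
        import ROOT

        E_gamma = 5.0e-3            # 5 MeV photon, in GeV
        m_d, m_p, m_n = 1.875613, 0.938272, 0.939565   # masses in GeV

        beam = ROOT.TLorentzVector(0, 0, E_gamma, E_gamma)
        target = ROOT.TLorentzVector(0, 0, 0, m_d)
        event = ROOT.TGenPhaseSpace()
        event.SetDecay(beam + target, 2, array("d", [m_p, m_n]))

        for _ in range(5):
            weight = event.Generate()        # phase-space weight of this event
            proton = event.GetDecay(0)       # TLorentzVector of the proton
            print(f"w = {weight:.3f}, "
                  f"proton KE = {(proton.E() - m_p) * 1e3:.3f} MeV")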

  13. On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.

    PubMed

    Thomson, Rowan M; Kawrakow, Iwan

    2011-08-01

    The validity of "classic" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.

  14. Systematic investigation on the validity of partition model dosimetry for 90Y radioembolization using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Aziz Hashikin, Nurul Ab; Yeong, Chai-Hong; Guatelli, Susanna; Jeet Abdullah, Basri Johan; Ng, Kwan-Hoong; Malaroda, Alessandra; Rosenfeld, Anatoly; Perkins, Alan Christopher

    2017-09-01

    We aimed to investigate the validity of the partition model (PM) in estimating the absorbed doses to liver tumour (D_T), normal liver tissue (D_NL) and lungs (D_L) when cross-fire irradiation between these compartments is taken into consideration. A MIRD-5 phantom incorporating various treatment parameters, i.e. tumour involvement (TI), tumour-to-normal liver uptake ratio (T/N) and lung shunting (LS), was simulated using the Geant4 Monte Carlo (MC) toolkit. 10⁸ track histories were generated for each combination of the three parameters to obtain the absorbed dose per activity uptake in each compartment (D_T/A_T, D_NL/A_NL, and D_L/A_L). The administered activities, A, were estimated using the PM so as to achieve either the limiting dose to normal liver, D_NL^lim, or to lungs, D_L^lim (70 or 30 Gy, respectively). Using these administered activities, the activity uptake in each compartment (A_T, A_NL, and A_L) was estimated and multiplied by the absorbed dose per activity uptake obtained from the MC simulations, to give the actual dose received by each compartment. The PM overestimated D_L by 11.7% in all cases, due to particles escaping from the lungs. D_T and D_NL from MC were largely affected by T/N, which is not considered by the PM due to the exclusion of cross-fire at the tumour-normal liver boundary. This resulted in the PM overestimating D_T by up to 8% and underestimating D_NL by as much as -78%. When D_NL^lim was estimated via the PM, the MC simulations showed significantly higher D_NL for cases with higher T/N and LS ≤ 10%. All D_L and D_T from MC were overestimated by the PM, thus D_L^lim was never exceeded. The PM leads to inaccurate dose estimations due to the exclusion of cross-fire irradiation, i.e. between the tumour and normal liver tissue. Caution should be taken for cases with higher TI and T/N and lower LS, as they contribute to major underestimation of D_NL. For D_L, a different correction factor for dose calculation may be used for improved accuracy.
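
    The partition model itself reduces to a few lines: split the administered activity by the lung shunt fraction and the T/N ratio, then apply the 90Y MIRD relation D[Gy] = 49.67·A[GBq]/m[kg] per compartment. The input values below are illustrative, not the study's parameters.

        def partition_doses(A_GBq, lsf, t_to_n, m_T, m_NL, m_L=1.0):
            # lungs receive the shunted fraction; the rest splits between
            # tumour and normal liver in proportion to T/N and mass
            A_L = A_GBq * lsf
            rest = A_GBq - A_L
            A_T = rest * (t_to_n * m_T) / (t_to_n * m_T + m_NL)
            A_NL = rest - A_T
            dose = lambda a, m: 49.67 * a / m    # 90Y MIRD relation, Gy
            return dose(A_T, m_T), dose(A_NL, m_NL), dose(A_L, m_L)

        D_T, D_NL, D_L = partition_doses(A_GBq=2.0, lsf=0.05, t_to_n=3.0,
                                         m_T=0.2, m_NL=1.5)
        print(f"D_T = {D_T:.1f} Gy, D_NL = {D_NL:.1f} Gy, D_L = {D_L:.1f} Gy")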

  15. Stochastic Partial Differential Equation Solver for Hydroacoustic Modeling: Improvements to Paracousti Sound Propagation Solver

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2017-12-01

    Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
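
    The MC aggregation being compared against can be sketched with Welford's online mean/variance update over repeated solves with perturbed inputs; solve() is a toy stand-in for a full Paracousti run.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 1.0, 200)

        def solve(sound_speed):
            # toy "received signal" whose arrival time depends on the medium
            return np.sin(2 * np.pi * 10 * (t - 0.1 * 1500.0 / sound_speed))

        mean = np.zeros_like(t)
        m2 = np.zeros_like(t)
        n = 0
        for _ in range(500):                   # 500 MC realizations
            c = rng.normal(1500.0, 15.0)       # uncertain sound speed, m/s
            sig = solve(c)
            n += 1
            delta = sig - mean
            mean += delta / n
            m2 += delta * (sig - mean)         # Welford variance accumulator

        std = np.sqrt(m2 / (n - 1))
        print("max signal std across the trace:", std.max())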

  16. Astronaut William S. McArthur in training for contingency EVA in WETF

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Astronaut William S. McArthur, mission specialist, participates in training for contingency extravehicular activity (EVA) for the STS-58 mission. He is wearing the extravehicular mobility unit (EMU) minus his helmet. For simulation purposes, McArthur was about to be submerged to a point of neutral buoyancy in the JSC Weightless Environment Training Facility (WETF).

  17. Poster — Thur Eve — 47: Monte Carlo Simulation of Scp, Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.

    The in-water output ratio (Scp), in-air output ratio (Sc), and phantom scattering factor (Sp) are important parameters for radiotherapy dose calculation. Experimentally, Scp is obtained by measuring the dose-rate ratio in a water phantom, and Sc the water kerma-rate ratio in air. There is no method that allows direct measurement of Sp. The Monte Carlo (MC) method has been used in the literature to simulate Scp and Sc, similar to the experimental setup, but to the best of our knowledge no direct MC simulation of Sp is available yet. We propose in this report a method of performing direct MC simulation of Sp. Starting from the definition, we derived that the Sp of a clinical photon beam can be approximated by the ratio of the dose rates contributed by the primary beam for a given field size and for the reference field size. Since only the primary beam is used, any Linac head scattering should be excluded from the simulation, which can be realized by using the incident electron as a scoring parameter for MU. We performed MC simulations for Scp, Sc and Sp. Scp matches well with golden beam data. Sp obtained by the proposed method agrees well with that obtained using the traditional method, Sp=Scp/Sc. Since the smaller the field size, the more the primary beam dominates, our Sp simulation method is accurate for small fields. By analyzing the calculated data, we found that the method can also be used without problems for large fields. The difference it introduces is clinically insignificant.
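
    The traditional route that the new method is checked against is simply Sp = Scp/Sc per field size, as in this sketch with typical-looking placeholder values rather than measured data.

        import numpy as np

        field = np.array([4.0, 6.0, 10.0, 20.0, 30.0])   # cm, square field side
        Scp = np.array([0.920, 0.960, 1.000, 1.050, 1.070])
        Sc  = np.array([0.960, 0.980, 1.000, 1.020, 1.030])

        Sp = Scp / Sc                                    # phantom scatter factor
        for f, s in zip(field, Sp):
            print(f"{f:4.0f} cm: Sp = {s:.3f}")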

  18. Accelerated event-by-event Monte Carlo microdosimetric calculations of electrons and protons tracks on a multi-core CPU and a CUDA-enabled GPU.

    PubMed

    Kalantzis, Georgios; Tachibana, Hidenobu

    2014-01-01

    For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of these methods is their extensive requirement for computational time. In this work we present an event-by-event MC code for low-projectile-energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU were utilized simultaneously. The two implementation schemes were tested and compared with the sequential single-threaded MC code on the CPU. The performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, and the hybrid approach improved this speedup by up to a further 20%. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time-consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.
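
    In its simplest forward form, the Feynman-Kac representation estimates a runaway probability by launching Euler-Maruyama trajectories from a given state, as sketched below with made-up coefficients; the paper's backward BSDE solver computes the same quantity for all initial states at once and is the faster alternative to this brute-force reference.

        import numpy as np

        rng = np.random.default_rng(4)

        def runaway_probability(p0, n_paths=20_000, T=1.0, dt=1e-3,
                                p_run=3.0, E=0.4, sigma=0.6):
            # probability that a toy drift-diffusion momentum process exceeds
            # the runaway threshold p_run before the terminal time T
            p = np.full(n_paths, p0)
            escaped = np.zeros(n_paths, dtype=bool)
            for _ in range(int(T / dt)):
                drift = E - p / (1.0 + p * p)   # toy acceleration minus friction
                p += drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
                escaped |= p >= p_run
            return escaped.mean()

        for p0 in (0.5, 1.5, 2.5):
            print(f"P_runaway(p0={p0}) ~ {runaway_probability(p0):.3f}")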

  20. Surface tension and phase coexistence properties of the lattice fluid from a virtual site removal Monte Carlo strategy

    NASA Astrophysics Data System (ADS)

    Provata, Astero; Prassas, Vassilis D.; Theodorou, Doros N.

    1997-10-01

    A thin liquid film of lattice fluid in equilibrium with its vapor is studied in two and three dimensions with canonical Monte Carlo (MC) simulation and self-consistent field (SCF) theory in the temperature range 0.45Tc to Tc, where Tc is the liquid-gas critical temperature. Extending the approach of Oates et al. [Philos. Mag. B 61, 337 (1990)] to anisotropic systems, we develop a method for the MC computation of the transverse and normal pressure profiles, and hence of the surface tension, based on virtual removals of individual sites or blocks of sites from the system. Results from the implementation of this new method, obtained at very modest computational cost, are in reasonable agreement with exact values and other MC estimates of the surface tension of the 2-d and 3-d model systems, respectively. SCF estimates of the interfacial density profiles, the surface tension, the vapor pressure curve and the binodal curve compare well with the MC results away from Tc, but show the expected deviations at high temperatures.
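
    A canonical lattice-gas MC of the kind described can be sketched with Metropolis particle/hole swaps; the parameters are generic lattice-gas values, unrelated to the paper's specific model and pressure-profile machinery.

        import numpy as np

        rng = np.random.default_rng(6)
        L, eps, kT = 32, 1.0, 0.5
        occ = np.zeros((L, L), dtype=int)
        occ[: L // 2, :] = 1                    # half-filled: a liquid "film"

        def site_energy(i, j):
            # -eps per occupied nearest neighbour, periodic boundaries
            nn = (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
                  + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])
            return -eps * nn

        for _ in range(200_000):
            i1, j1, i2, j2 = rng.integers(0, L, 4)
            if occ[i1, j1] == occ[i2, j2]:
                continue                        # need one particle, one hole
            e_old = (occ[i1, j1] * site_energy(i1, j1)
                     + occ[i2, j2] * site_energy(i2, j2))
            occ[i1, j1], occ[i2, j2] = occ[i2, j2], occ[i1, j1]
            e_new = (occ[i1, j1] * site_energy(i1, j1)
                     + occ[i2, j2] * site_energy(i2, j2))
            if rng.random() >= np.exp(-(e_new - e_old) / kT):
                occ[i1, j1], occ[i2, j2] = occ[i2, j2], occ[i1, j1]   # reject
        print("density profile across the film:", occ.mean(axis=1).round(2))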

  1. The dose distribution of low dose rate Cs-137 in intracavitary brachytherapy: comparison of Monte Carlo simulation, treatment planning calculation and polymer gel measurement

    NASA Astrophysics Data System (ADS)

    Fragoso, M.; Love, P. A.; Verhaegen, F.; Nalder, C.; Bidmead, A. M.; Leach, M.; Webb, S.

    2004-12-01

    In this study, the dose distribution delivered by low-dose-rate Cs-137 brachytherapy sources was investigated using Monte Carlo (MC) techniques and polymer gel dosimetry. The results obtained were compared with a commercial treatment planning system (TPS). The 20 mm and 30 mm diameter Selectron vaginal applicator sets (Nucletron) were used for this study. A homogeneous and a heterogeneous polymer gel phantom, the latter containing an air cavity, were used to measure the dose distribution from these sources. The same geometrical set-up was used for the MC calculations. Beyond the applicator tip, differences in dose as large as 20% were found between the MC and the TPS. This is attributed to the presence of stainless steel in the applicator and source set, which is not considered by the TPS calculations. Beyond the air cavity, differences in dose of around 5% were noted, due to the TPS assuming a homogeneous water medium. The polymer gel results were in good agreement with the MC calculations for all the cases investigated.

  2. Technical Note: Defining cyclotron-based clinical scanning proton machines in a FLUKA Monte Carlo system.

    PubMed

    Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank

    2018-02-01

    Cyclotron-based pencil beam scanning (PBS) proton machines represent nowadays the majority and most affordable choice for proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than for passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum energy to the desired energy, resulting in a unique spot size, divergence, and energy spread depending on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated using experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations of how MC codes can be adapted to simulate cyclotron-based scanning proton machines are given, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model is better able to describe the experimental beam. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
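
    The γ(3%-3 mm) figure of merit quoted above can be illustrated with a minimal 1-D gamma-index evaluation on toy dose curves.

        import numpy as np

        # For each reference point, find the minimum combined dose/distance
        # discrepancy with respect to the evaluated curve (global 3%/3 mm).
        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
            d_norm = dd * d_ref.max()            # global dose criterion
            gammas = np.empty(x_ref.size)
            for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
                dist2 = ((x_eval - x0) / dta) ** 2
                dose2 = ((d_eval - d0) / d_norm) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return gammas

        x = np.linspace(0, 100, 501)                         # mm
        ref = np.exp(-((x - 50) / 20) ** 2)                  # reference dose
        ev = np.exp(-((x - 50.8) / 20) ** 2) * 1.01          # shifted/scaled
        g = gamma_1d(x, ref, x, ev)
        print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1):.1%}")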

  3. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, K; Lee, J; Sterpin, E

    2014-06-15

    Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings and merely 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast, and multipurpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints. It has been successfully validated against Geant4. This work has been financially supported by InVivoIGT, a public/private partnership between UCL and IBA.

  4. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/Geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/Geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.
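
    The class-II condensed-history idea (hard ionizations above a threshold sampled individually, soft losses grouped into a continuous term) can be caricatured as follows; the cross sections and constants are toy values, not MCsquare's proton physics.

        import numpy as np

        rng = np.random.default_rng(8)

        E = 200.0          # MeV, initial kinetic energy
        T_cut = 0.1        # MeV, hard-event threshold (user-specified)
        S_soft = 0.45      # MeV/mm, restricted stopping power (toy constant)
        lam_hard = 2.0     # mm, mean free path between hard ionizations (toy)
        z = 0.0

        while E > 1.0:
            step = -lam_hard * np.log(rng.random())  # distance to next hard event
            E -= S_soft * step                       # continuous soft losses
            z += step
            if E <= 1.0:
                break
            # hard ionization: 1/T^2-like spectrum between T_cut and E/2
            u = rng.random()
            T = T_cut / (1.0 - u * (1.0 - 2.0 * T_cut / E))
            E -= T
        print(f"stopped near z = {z:.1f} mm")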

  5. The effect of statistical noise on IMRT plan quality and convergence for MC-based and MC-correction-based optimized treatment plans.

    PubMed

    Siebers, Jeffrey V

    2008-04-04

    Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose). Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where a MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D > 0.5·Dmax. The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.

  6. Using McStas for modelling complex optics, using simple building bricks

    NASA Astrophysics Data System (ADS)

    Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim

    2011-04-01

    The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow are intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the required logic to allow scattering between the mirrors. For quite a while, users have requested the ability to allow "components inside components", or meta-components, combining the functionality of several simple components to achieve more complex behaviour, i.e. four single mirror plates together defining a guide. We show here that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.

  7. Moment analysis method as applied to the 2S → 2P transition in cryogenic alkali metal/rare gas matrices.

    PubMed

    Terrill Vosbein, Heidi A; Boatz, Jerry A; Kenney, John W

    2005-12-22

    The moment analysis (MA) method has been tested for the case of 2S → 2P ([core]ns¹ → [core]np¹) transitions of alkali metal atoms (M) doped into cryogenic rare gas (Rg) matrices using theoretically validated simulations. Theoretical/computational M/Rg system models are constructed with precisely defined parameters that closely mimic known M/Rg systems. Monte Carlo (MC) techniques are then employed to generate simulated absorption and magnetic circular dichroism (MCD) spectra of the 2S → 2P M/Rg transition, to which the MA method can be applied with the goal of seeing how effectively the MA method re-extracts the M/Rg system parameters from these known simulated systems. The MA method is summarized in general, and an assessment is made of the use of the MA method in the rigid shift approximation typically used to evaluate M/Rg systems. The MC-MCD simulation technique is summarized, and validating evidence is presented. The simulation results and the assumptions used in applying MA to M/Rg systems are evaluated. The simulation results on Na/Ar demonstrate that the MA method does successfully re-extract the 2P spin-orbit coupling constant and Landé g-factor values initially used to build the simulations. However, assigning physical significance to the cubic and noncubic Jahn-Teller (JT) vibrational mode parameters in cryogenic M/Rg systems is not supported.
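
    Moment analysis starts from the spectral moments of the band; the sketch below computes the zeroth, first and second moments of a toy triplet standing in for a matrix-isolated 2S → 2P absorption (all band parameters are invented).

        import numpy as np

        E = np.linspace(15000.0, 20000.0, 2000)     # wavenumbers, 1/cm
        def gauss(E0, w):
            return np.exp(-0.5 * ((E - E0) / w) ** 2)
        A = gauss(16800, 150) + gauss(17300, 150) + gauss(17900, 150)

        dE = E[1] - E[0]
        M0 = A.sum() * dE                           # zeroth moment
        Ebar = (E * A).sum() * dE / M0              # first moment: centroid
        M2 = ((E - Ebar) ** 2 * A).sum() * dE / M0  # second central moment
        print(f"M0 = {M0:.1f}, centroid = {Ebar:.0f} cm^-1, width^2 = {M2:.0f}")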

  8. Cavity theory applications for kilovoltage cellular dosimetry.

    PubMed

    Oliver, P A K; Thomson, Rowan M

    2017-06-07

    Relationships between macroscopic (bulk tissue) and microscopic (cellular) dose descriptors are investigated using cavity theory and Monte Carlo (MC) simulations. Small, large, and multiple intermediate cavity theory (SCT, LCT, and ICT, respectively) approaches are considered for 20 to 370 keV incident photons; ICT is a sum of SCT and LCT contributions weighted by parameter d. Considering μm-sized cavities of water in bulk tissue phantoms, different cavity theory approaches are evaluated via comparison of D w,m/D m,m (where D w,m is dose-to-water-in-medium and D m,m is dose-to-medium-in-medium) with MC results. The best overall agreement is achieved with an ICT approach in which d = (1 − e^(−βL))/(βL), where L is the mean chord length of the cavity and β is given by e^(−β R_CSDA) = 0.04 (R_CSDA is the continuous slowing down approximation range of an electron of energy equal to that of incident photons). Cell nucleus doses, D nuc, computed with this ICT approach are compared with those from MC simulations involving multicellular soft tissue models considering a representative range of cell/nucleus sizes and elemental compositions. In 91% of cases, ICT and MC predictions agree within 3%; disagreement is at most 8.8%. These results suggest that cavity theory may be useful for linking doses from model-based dose calculation algorithms (MBDCAs) with energy deposition in cellular targets. Finally, based on the suggestion that clusters of water molecules associated with DNA are important radiobiological targets, two approaches for estimating dose-to-water by application of SCT to MC results for D m,m or D nuc are compared. Results for these two estimates differ by up to 35%, demonstrating the sensitivity of energy deposition within a small volume of water in nucleus to the geometry and composition of its surroundings. In terms of the debate over the dose specification medium for MBDCAs, these results do not support conversion of D m,m to D w,m using SCT.

  9. Cavity theory applications for kilovoltage cellular dosimetry

    NASA Astrophysics Data System (ADS)

    Oliver, P. A. K.; Thomson, Rowan M.

    2017-06-01

    Relationships between macroscopic (bulk tissue) and microscopic (cellular) dose descriptors are investigated using cavity theory and Monte Carlo (MC) simulations. Small, large, and multiple intermediate cavity theory (SCT, LCT, and ICT, respectively) approaches are considered for 20 to 370 keV incident photons; ICT is a sum of SCT and LCT contributions weighted by parameter d. Considering μm-sized cavities of water in bulk tissue phantoms, different cavity theory approaches are evaluated via comparison of D w,m/D m,m (where D w,m is dose-to-water-in-medium and D m,m is dose-to-medium-in-medium) with MC results. The best overall agreement is achieved with an ICT approach in which d = (1 − e^(−βL))/(βL), where L is the mean chord length of the cavity and β is given by e^(−β R_CSDA) = 0.04 (R_CSDA is the continuous slowing down approximation range of an electron of energy equal to that of incident photons). Cell nucleus doses, D nuc, computed with this ICT approach are compared with those from MC simulations involving multicellular soft tissue models considering a representative range of cell/nucleus sizes and elemental compositions. In 91% of cases, ICT and MC predictions agree within 3%; disagreement is at most 8.8%. These results suggest that cavity theory may be useful for linking doses from model-based dose calculation algorithms (MBDCAs) with energy deposition in cellular targets. Finally, based on the suggestion that clusters of water molecules associated with DNA are important radiobiological targets, two approaches for estimating dose-to-water by application of SCT to MC results for D m,m or D nuc are compared. Results for these two estimates differ by up to 35%, demonstrating the sensitivity of energy deposition within a small volume of water in nucleus to the geometry and composition of its surroundings. In terms of the debate over the dose specification medium for MBDCAs, these results do not support conversion of D m,m to D w,m using SCT.
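
    A small numeric sketch of the ICT weighting defined above; the cavity chord lengths and electron range below are placeholder values, not data from the paper:

        import numpy as np

        def ict_weight(chord_len_cm, r_csda_cm):
            """ICT weighting d = (1 - exp(-beta*L)) / (beta*L), with beta fixed by
            exp(-beta * R_CSDA) = 0.04, i.e. beta = -ln(0.04) / R_CSDA."""
            beta = -np.log(0.04) / r_csda_cm
            x = beta * chord_len_cm
            return (1.0 - np.exp(-x)) / x

        # Placeholder cavity sizes and electron range (cm):
        print(ict_weight(5e-4, 1e-2))   # small cavity: d near 1, SCT-like behaviour
        print(ict_weight(5e-2, 1e-2))   # large cavity: d near 0, LCT-like behaviour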

  10. McMAC: Towards a MAC Protocol with Multi-Constrained QoS Provisioning for Diverse Traffic in Wireless Body Area Networks

    PubMed Central

    Monowar, Muhammad Mostafa; Hassan, Mohammad Mehedi; Bajaber, Fuad; Al-Hussein, Musaed; Alamri, Atif

    2012-01-01

    The emergence of heterogeneous applications with diverse requirements for resource-constrained Wireless Body Area Networks (WBANs) poses significant challenges for provisioning Quality of Service (QoS) with multi-constraints (delay and reliability) while preserving energy efficiency. To address such challenges, this paper proposes McMAC, a MAC protocol with multi-constrained QoS provisioning for diverse traffic classes in WBANs. McMAC classifies traffic based on their multi-constrained QoS demands and introduces a novel superframe structure based on the “transmit-whenever-appropriate” principle, which allows diverse periods for diverse traffic classes according to their respective QoS requirements. Furthermore, a novel emergency packet handling mechanism is proposed to ensure packet delivery with the least possible delay and the highest reliability. McMAC is also modeled analytically, and extensive simulations were performed to evaluate its performance. The results reveal that McMAC achieves the desired delay and reliability guarantee according to the requirements of a particular traffic class while achieving energy efficiency. PMID:23202224

  11. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and intensively investigates the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and state space predictor can achieve better compensation of transport delay than the McFarland predictor.

  12. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
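
    A schematic Metropolis-style acceptance rule for a protonation-state move under semigrand-canonical weighting, sketched from the general constant-pH MC literature rather than the NAMD implementation; energies are in kT units and the function name is invented:

        import math, random

        def accept_deprotonation(du_kt, ph, pka):
            """Accept an HA -> A- switch; du_kt is the (relative) energy change in kT,
            and the Henderson-Hasselbalch factor 10**(pH - pKa) favours A- above pKa."""
            log_acc = -du_kt + math.log(10.0) * (ph - pka)
            if log_acc >= 0.0:
                return True
            return random.random() < math.exp(log_acc)

        # Illustrative call: a glutamate-like site (pKa ~ 4.4) at pH 7 deprotonates readily.
        print(accept_deprotonation(du_kt=1.0, ph=7.0, pka=4.4))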

  13. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  14. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code

    NASA Astrophysics Data System (ADS)

    Panettieri, Vanessa; Amor Duch, Maria; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

    2007-01-01

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm2 and a thickness of 0.5 µm, placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water™ build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water™ cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.

  15. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code.

    PubMed

    Panettieri, Vanessa; Duch, Maria Amor; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

    2007-01-07

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm(2) and a thickness of 0.5 microm, placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.

  16. Development of an effective dose coefficient database using a computational human phantom and Monte Carlo simulations to evaluate exposure dose for the usage of NORM-added consumer products.

    PubMed

    Yoo, Do Hyeon; Shin, Wook-Geun; Lee, Jaekook; Yeom, Yeon Soo; Kim, Chan Hyeong; Chang, Byung-Uck; Min, Chul Hee

    2017-11-01

    After the Fukushima accident in Japan, the Korean Government implemented the "Act on Protective Action Guidelines Against Radiation in the Natural Environment" to regulate unnecessary radiation exposure to the public. However, despite the law, which came into effect in July 2012, an appropriate method to evaluate the equivalent and effective doses from naturally occurring radioactive material (NORM) in consumer products is not available. The aim of the present study is to develop and validate an effective dose coefficient database enabling simple and correct evaluation of the effective dose due to the usage of NORM-added consumer products. To construct the database, we used a skin source method with a computational human phantom and Monte Carlo (MC) simulation. For the validation, the effective dose was compared between the database, using an interpolation method, and the original MC method. Our results showed similar equivalent doses across the 26 organs, with average organ doses from the database and the MC calculations differing by less than 5%. The differences in the effective doses were even smaller, and the results generally show that equivalent and effective doses can be quickly calculated with the database with sufficient accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
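
    A minimal sketch of the database-lookup idea, assuming coefficients tabulated against a single geometric variable and interpolated between grid points; all numbers are illustrative placeholders, not entries from the published database:

        import numpy as np

        # Effective dose coefficients tabulated on a coarse grid (here, a
        # source-to-body distance) are interpolated instead of rerunning the
        # full MC for every product geometry. Values are synthetic.
        distances_cm = np.array([1.0, 5.0, 10.0, 30.0, 50.0])
        coeff_sv_per_bq_s = np.array([4.1e-16, 9.5e-17, 3.2e-17, 5.0e-18, 1.9e-18])

        def effective_dose_coefficient(d_cm):
            return np.interp(d_cm, distances_cm, coeff_sv_per_bq_s)

        print(effective_dose_coefficient(20.0))   # interpolated coefficient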

  17. SU-E-T-598: Parametric Equation for Quick and Reliable Estimate of Stray Neutron Doses in Proton Therapy and Application for Intracranial Tumor Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonfrate, A; Farah, J; Sayah, R

    2015-06-15

    Purpose: Development of a parametric equation suitable for daily use in routine clinic to provide estimates of stray neutron doses in proton therapy. Methods: Monte Carlo (MC) calculations using the UF-NCI 1-year-old phantom were exercised to determine the variation of stray neutron doses as a function of irradiation parameters while performing intracranial treatments. This was done by individually changing the proton beam energy, modulation width, collimator aperture and thickness, compensator thickness and the air gap size, while their impact on neutron doses was put into a single equation. The variation of neutron doses with distance from the target volume was also included in it. A first step consisted in establishing the fitting coefficients using 221 learning data points, which were neutron absorbed doses obtained with MC simulations, while a second step consisted in validating the final equation. Results: The variation of stray neutron doses with irradiation parameters was fitted with linear, polynomial, and other models, while a power-law model was used to fit the variation of stray neutron doses with distance from the target volume. The parametric equation fitted the MC simulations well when establishing the fitting coefficients, with discrepancies in the estimated neutron absorbed doses within 10%. The discrepancy can reach ∼25% for the bladder, the farthest organ from the target volume. Finally, the validation showed results in compliance with MC calculations, since the discrepancies were also within 10% for head-and-neck and thoracic organs, while they can reach ∼25%, again for pelvic organs. Conclusion: The parametric equation presents promising results and will be validated for other target sites as well as other facilities, moving towards a universal method.
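
    A hedged sketch of the distance fit described above, assuming a power-law model fitted to synthetic MC "learning" points (the actual equation combines several irradiation parameters):

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(r_cm, a, b):
            return a * r_cm**(-b)

        # Synthetic learning points standing in for MC neutron doses vs. distance:
        r_cm = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
        h_mc = np.array([120.0, 35.0, 9.0, 4.2, 2.4])

        (a, b), _ = curve_fit(power_law, r_cm, h_mc, p0=(1.0e3, 1.5))
        rel_residuals = (power_law(r_cm, a, b) - h_mc) / h_mc
        print(a, b, np.abs(rel_residuals).max())   # compare against the ~10% criterion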

  18. Constant-pH molecular dynamics using stochastic titration

    NASA Astrophysics Data System (ADS)

    Baptista, António M.; Teixeira, Vitor H.; Soares, Cláudio M.

    2002-09-01

    A new method is proposed for performing constant-pH molecular dynamics (MD) simulations, that is, MD simulations where pH is one of the external thermodynamic parameters, like the temperature or the pressure. The protonation state of each titrable site in the solute is allowed to change during a molecular mechanics (MM) MD simulation, the new states being obtained from a combination of continuum electrostatics (CE) calculations and Monte Carlo (MC) simulation of protonation equilibrium. The coupling between the MM/MD and CE/MC algorithms is done in a way that ensures a proper Markov chain, sampling from the intended semigrand canonical distribution. This stochastic titration method is applied to succinic acid, aimed at illustrating the method and examining the choice of its adjustable parameters. The complete titration of succinic acid, using constant-pH MD simulations at different pH values, gives a clear picture of the coupling between the trans/gauche isomerization and the protonation process, making it possible to reconcile some apparently contradictory results of previous studies. The present constant-pH MD method is shown to require a moderate increase of computational cost when compared to the usual MD method.

  19. Comparison of the McGrath® Series 5 and GlideScope® Ranger with the Macintosh laryngoscope by paramedics

    PubMed Central

    2011-01-01

    Background Out-of-hospital endotracheal intubation performed by paramedics using the Macintosh blade for direct laryngoscopy is associated with a high incidence of complications. The novel technique of video laryngoscopy has been shown to improve glottic view and intubation success in the operating room. The aim of this study was to compare glottic view, time of intubation and success rate of the McGrath® Series 5 and GlideScope® Ranger video laryngoscopes with the Macintosh laryngoscope by paramedics. Methods Thirty paramedics performed six intubations in a randomised order with all three laryngoscopes in an airway simulator with a normal airway. Subsequently, every participant performed one intubation attempt with each device in the same manikin with simulated cervical spine rigidity using a cervical collar. Glottic view, time until visualisation of the glottis and time until first ventilation were evaluated. Results Time until first ventilation was equivalent after three intubations in the first scenario. In the scenario with decreased cervical motion, the time until first ventilation was longer using the McGrath® compared to the GlideScope® and Macintosh (p < 0.01). The success rate for endotracheal intubation was similar for all three devices. Glottic view was only improved using the McGrath® device (p < 0.001) compared to using the Macintosh blade. Conclusions The learning curve for video laryngoscopy in paramedics was steep in this study. However, these data do not support prehospital use of the McGrath® and GlideScope® devices by paramedics. PMID:21241469

  20. Quantitative comparison of simulated and measured signals in the STEM mode of a SEM

    NASA Astrophysics Data System (ADS)

    Walker, C. G. H.; Konvalina, I.; Mika, F.; Frank, L.; Müllerová, I.

    2018-01-01

    The transmission of electrons with energies of 15 keV and 30 keV through Si and Au films, each of 100 nm thickness, has been studied in a Scanning Transmission Electron Microscope. The electrons transmitted through the films were detected using a multi-annular photo-detector consisting of a central Bright Field (BF) and several Dark Field (DF) detectors. For the experiment, the detector was gradually offset from the axis, and the signal from the central BF detector was studied as a function of the offset distance and compared with MC simulations. The experiment showed better agreement with several different MC simulations than previous results, but differences were still found, particularly for low-angle scattering from Si. Data from Au suggest that high-energy secondary electrons contribute to the signal on the central BF detector at low primary beam energies, when the STEM detector is in its usual central position.

  1. Climate change and fire effects on a prairie-woodland ecotone: projecting species range shifts with a dynamic global vegetation model

    USGS Publications Warehouse

    King, David A.; Bachelet, Dominique M.; Symstad, Amy J.

    2013-01-01

    Large shifts in species ranges have been predicted under future climate scenarios based primarily on niche-based species distribution models. However, the mechanisms that would cause such shifts are uncertain. Natural and anthropogenic fires have shaped the distributions of many plant species, but their effects have seldom been included in future projections of species ranges. Here, we examine how the combination of climate and fire influence historical and future distributions of the ponderosa pine–prairie ecotone at the edge of the Black Hills in South Dakota, USA, as simulated by MC1, a dynamic global vegetation model that includes the effects of fire, climate, and atmospheric CO2 concentration on vegetation dynamics. For this purpose, we parameterized MC1 for ponderosa pine in the Black Hills, designating the revised model as MC1-WCNP. Results show that fire frequency, as affected by humidity and temperature, is central to the simulation of historical prairies in the warmer lowlands versus woodlands in the cooler, moister highlands. Based on three downscaled general circulation model climate projections for the 21st century, we simulate greater frequencies of natural fire throughout the area due to substantial warming and, for two of the climate projections, lower relative humidity. However, established ponderosa pine forests are relatively fire resistant, and areas that were initially wooded remained so over the 21st century for most of our future climate x fire management scenarios. This result contrasts with projections for ponderosa pine based on climatic niches, which suggest that its suitable habitat in the Black Hills will be greatly diminished by the middle of the 21st century. We hypothesize that the differences between the future predictions from these two approaches are due in part to the inclusion of fire effects in MC1, and we highlight the importance of accounting for fire as managed by humans in assessing both historical species distributions and future climate change effects.

  2. Climate change and fire effects on a prairie-woodland ecotone: projecting species range shifts with a dynamic global vegetation model.

    PubMed

    King, David A; Bachelet, Dominique M; Symstad, Amy J

    2013-12-01

    Large shifts in species ranges have been predicted under future climate scenarios based primarily on niche-based species distribution models. However, the mechanisms that would cause such shifts are uncertain. Natural and anthropogenic fires have shaped the distributions of many plant species, but their effects have seldom been included in future projections of species ranges. Here, we examine how the combination of climate and fire influence historical and future distributions of the ponderosa pine-prairie ecotone at the edge of the Black Hills in South Dakota, USA, as simulated by MC1, a dynamic global vegetation model that includes the effects of fire, climate, and atmospheric CO2 concentration on vegetation dynamics. For this purpose, we parameterized MC1 for ponderosa pine in the Black Hills, designating the revised model as MC1-WCNP. Results show that fire frequency, as affected by humidity and temperature, is central to the simulation of historical prairies in the warmer lowlands versus woodlands in the cooler, moister highlands. Based on three downscaled general circulation model climate projections for the 21st century, we simulate greater frequencies of natural fire throughout the area due to substantial warming and, for two of the climate projections, lower relative humidity. However, established ponderosa pine forests are relatively fire resistant, and areas that were initially wooded remained so over the 21st century for most of our future climate x fire management scenarios. This result contrasts with projections for ponderosa pine based on climatic niches, which suggest that its suitable habitat in the Black Hills will be greatly diminished by the middle of the 21st century. We hypothesize that the differences between the future predictions from these two approaches are due in part to the inclusion of fire effects in MC1, and we highlight the importance of accounting for fire as managed by humans in assessing both historical species distributions and future climate change effects.

  3. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, W; Farr, J

    2015-06-15

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and decrease memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed the critical variables affecting dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse-square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
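
    A simplified 1D sketch of the gamma test quoted above (dose difference combined with distance-to-agreement); real evaluations are 3D and include the 10% low-dose threshold, which is omitted here:

        import numpy as np

        def gamma_1d(x_mm, dose_eval, dose_ref, dd=0.02, dta_mm=2.0):
            """Global gamma index on a 1D profile (dd relative to the reference max)."""
            gammas = np.empty_like(dose_ref)
            for i, (xi, dr) in enumerate(zip(x_mm, dose_ref)):
                dd_term = (dose_eval - dr) / (dd * dose_ref.max())
                dta_term = (x_mm - xi) / dta_mm
                gammas[i] = np.sqrt(dd_term**2 + dta_term**2).min()
            return gammas

        x = np.linspace(0.0, 100.0, 201)            # position in mm
        ref = np.exp(-((x - 50.0) / 15.0) ** 2)     # synthetic reference profile
        ev = np.exp(-((x - 50.5) / 15.0) ** 2)      # slightly shifted evaluation
        pass_rate = (gamma_1d(x, ev, ref) <= 1.0).mean()
        print(f"gamma pass rate: {pass_rate:.1%}")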

  4. Rotating and translating anthropomorphic head voxel models to establish an horizontal Frankfort plane for dental CBCT Monte Carlo simulations: a dose comparison study

    NASA Astrophysics Data System (ADS)

    Stratis, A.; Zhang, G.; Jacobs, R.; Bogaerts, R.; Bosmans, H.

    2016-12-01

    In order to carry out Monte Carlo (MC) dosimetry studies, voxel phantoms modeling human anatomy, built from organ-based segmentation of CT image data sets, are applied to simulation frameworks. The resulting voxel phantoms preserve the patient CT acquisition geometry; in the case of head voxel models built upon head CT images, the head support with which CT scanners are equipped introduces an inclination to the head, and hence to the head voxel model. In dental cone beam CT (CBCT) imaging, patients are always positioned in such a way that the Frankfort line is horizontal, implying that there is no head inclination. The orientation of the head is important, as it influences the distance of critical radiosensitive organs like the thyroid and the esophagus from the x-ray tube. This work aims to propose a procedure to adjust head voxel phantom orientation, and to investigate the impact of head inclination on organ doses in dental CBCT MC dosimetry studies. The female adult ICRP phantom and three in-house-built paediatric voxel phantoms were used in this study. An EGSnrc MC framework was employed to simulate two commonly used protocols: a Morita Accuitomo 170 dental CBCT scanner (FOVs: 60 × 60 mm2 and 80 × 80 mm2, standard resolution), and a 3D Teeth protocol (FOV: 100 × 90 mm2) on a Planmeca Promax 3D MAX scanner. Analysis of the results revealed large differences in absorbed organ doses for radiosensitive organs between the original and the geometrically corrected voxel models of this study, ranging from −45.6% to 39.3%. Therefore, accurate dental CBCT MC dose calculations require geometrical adjustments to be applied to head voxel models.
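
    A minimal sketch of the geometric-correction step, assuming the inclination is removed by a single rotation about the left-right axis; the angle and interpolation choices below are illustrative, not values from the paper:

        import numpy as np
        from scipy import ndimage

        # Synthetic voxel phantom of integer organ labels (placeholder anatomy):
        phantom = np.random.randint(0, 5, size=(64, 64, 64)).astype(np.int16)

        corrected = ndimage.rotate(
            phantom, angle=-12.0, axes=(0, 2),  # rotate in the sagittal plane
            reshape=True, order=0,              # order=0 preserves integer organ labels
            mode="constant", cval=0)
        print(phantom.shape, "->", corrected.shape)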

  5. Effect of Gold Nanoparticles on Prostate Dose Distribution under Ir-192 Internal and 18 MV External Radiotherapy Procedures Using Gel Dosimetry and Monte Carlo Method.

    PubMed

    Khosravi, H; Hashemi, B; Mahdavi, S R; Hejazi, P

    2015-03-01

    Gel polymers are considered as new dosimeters for determining radiotherapy dose distribution in three dimensions. The ability of a new formulation of MAGIC-f polymer gel was assessed by experimental measurement and the Monte Carlo (MC) method for studying the effect of gold nanoparticles (GNPs) on prostate dose distributions under internal Ir-192 and external 18 MV radiotherapy practices. A Plexiglas phantom was made representing the human pelvis. The GNPs, 15 nm in diameter and at 0.1 mM concentration, were synthesized using the chemical reduction method. Then, a new formulation of MAGIC-f gel was synthesized. The fabricated gel was poured into the tubes located at the prostate (with and without the GNPs) and bladder locations of the phantom. The phantom was irradiated with an Ir-192 source and an 18 MV beam of a Varian linac separately, based on common radiotherapy procedures used for prostate cancer. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. The absolute doses at the reference points and the isodose curves resulting from the experimental measurement of the gels and the MC simulations following the internal and external radiotherapy practices were compared. The mean absorbed doses measured with the gel in the presence of the GNPs in the prostate were 15% and 8% higher than the corresponding values without the GNPs under the internal and external radiation therapies, respectively. MC simulations also indicated a dose increase of 14% and 7% due to the presence of the GNPs for the same internal and external radiotherapy practices, respectively. There was good agreement between the dose enhancement factors (DEFs) estimated with MC simulations and the experimental gel measurements due to the GNPs. The results indicated that the polymer gel dosimetry method, as developed and used in this study, can be recommended as a reliable method for investigating the DEF of GNPs in internal and external radiotherapy practices.

  6. Climate change and fire effects on a prairie–woodland ecotone: projecting species range shifts with a dynamic global vegetation model

    PubMed Central

    King, David A; Bachelet, Dominique M; Symstad, Amy J

    2013-01-01

    Large shifts in species ranges have been predicted under future climate scenarios based primarily on niche-based species distribution models. However, the mechanisms that would cause such shifts are uncertain. Natural and anthropogenic fires have shaped the distributions of many plant species, but their effects have seldom been included in future projections of species ranges. Here, we examine how the combination of climate and fire influence historical and future distributions of the ponderosa pine–prairie ecotone at the edge of the Black Hills in South Dakota, USA, as simulated by MC1, a dynamic global vegetation model that includes the effects of fire, climate, and atmospheric CO2 concentration on vegetation dynamics. For this purpose, we parameterized MC1 for ponderosa pine in the Black Hills, designating the revised model as MC1-WCNP. Results show that fire frequency, as affected by humidity and temperature, is central to the simulation of historical prairies in the warmer lowlands versus woodlands in the cooler, moister highlands. Based on three downscaled general circulation model climate projections for the 21st century, we simulate greater frequencies of natural fire throughout the area due to substantial warming and, for two of the climate projections, lower relative humidity. However, established ponderosa pine forests are relatively fire resistant, and areas that were initially wooded remained so over the 21st century for most of our future climate x fire management scenarios. This result contrasts with projections for ponderosa pine based on climatic niches, which suggest that its suitable habitat in the Black Hills will be greatly diminished by the middle of the 21st century. We hypothesize that the differences between the future predictions from these two approaches are due in part to the inclusion of fire effects in MC1, and we highlight the importance of accounting for fire as managed by humans in assessing both historical species distributions and future climate change effects. PMID:24455138

  7. Assessment of contrast gain signature in inferred magnocellular and parvocellular pathways in patients with glaucoma

    PubMed Central

    Sun, Hao; Swanson, William H.; Arvidson, Brian; Dul, Mitchell W.

    2010-01-01

    PURPOSE Contrast gain signatures of inferred magnocellular and parvocellular postreceptoral pathways were assessed for patients with glaucoma using a contrast discrimination paradigm developed by Pokorny and Smith. The potential causes for changes in contrast gain signature were investigated using model simulations of ganglion cell contrast responses. METHODS Foveal contrast discrimination thresholds were measured with a pedestal-Δ-pedestal paradigm developed by Pokorny and Smith (1997). Stimuli were 27 msec luminance increments superimposed on 227 msec pulsed Δ-pedestals. Contrast thresholds and contrast gain signatures mediated by the inferred magnocellular (MC) and parvocellular (PC) pathways were assessed using linear fits to contrast discrimination thresholds at either lower or higher Δ-pedestal contrasts, respectively. Twenty-seven patients with glaucoma were tested, as well as 16 age-similar control subjects free of eye disease. RESULTS Contrast sensitivity and contrast gain signature mediated by the inferred MC pathway were lower for the glaucoma group, and reduced contrast gain signature was correlated with reduced contrast sensitivity (r2=45%, p<0.0005). These two parameters mediated by the inferred PC pathway were little affected for the glaucoma group. Model simulations suggest that the reduced contrast sensitivity and contrast gain signature were consistent with the hypothesis that reduced MC ganglion cell dendritic complexity can lead to reduced effective retinal illuminance, and hence increased semi-saturation contrast of the ganglion cell contrast response functions. CONCLUSIONS The contrast sensitivity and contrast gain signature of the inferred MC pathway were reduced in patients with glaucoma. The results were consistent with a model of ganglion cell dysfunction due to reduced synaptic density. PMID:18501947
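
    A small sketch of the saturating contrast-response idea invoked above, using a Naka-Rushton form in which a raised semi-saturation contrast lowers the low-contrast gain; parameter values are illustrative:

        import numpy as np

        def contrast_response(c, r_max=1.0, c50=0.10):
            """Naka-Rushton saturating response; gain at low contrast ~ r_max / c50."""
            return r_max * c / (c + c50)

        contrasts = np.array([0.02, 0.05, 0.10, 0.20, 0.40])
        healthy = contrast_response(contrasts, c50=0.10)
        glaucoma = contrast_response(contrasts, c50=0.25)   # raised semi-saturation contrast
        print(np.round(healthy / glaucoma, 2))              # largest ratio at low contrast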

  8. Comparison of Direct Sequence Spread Spectrum Rake Receiver with a Maximum Ratio Combining Multicarrier Spread Spectrum Receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daryl Leon Wasden; Hussein Moradi; Behrouz Farhang-Broujeny

    2014-06-01

    This paper presents a theoretical analysis of the performance of a filter bank-based multicarrier spread spectrum (FB-MC-SS) system. We consider an FB-MC-SS setup where each data symbol is spread across multiple subcarriers, but there is no spreading in time. The results are then compared with those of the well-known direct sequence spread spectrum (DS-SS) system with a rake receiver for its best performance. We compare the two systems when the channel noise is white. We prove that as the processing gains of the two systems tend to infinity both approach the same performance. However, numerical simulations show that, in practice, where processing gain is limited, FB-MC-SS outperforms DS-SS.
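
    A toy illustration of spreading one symbol across N subcarriers and recovering it by maximum ratio combining in white noise, echoing the asymptotic equivalence argued above; this is not the authors' filter-bank implementation:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sub = 64                                    # processing gain (subcarriers)
        symbol = 1.0 + 0.0j
        code = np.exp(2j * np.pi * rng.random(n_sub)) # unit-modulus spreading sequence
        h = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) / np.sqrt(2.0)

        rx = h * code * symbol + 0.5 * (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub))
        # MRC despreading: weight each subcarrier by the conjugate of (channel * code)
        est = np.sum(np.conj(h * code) * rx) / np.sum(np.abs(h) ** 2)
        print(est)                                    # close to the transmitted 1+0j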

  9. Grid Block Design Based on Monte Carlo Simulated Dosimetry, the Linear Quadratic and Hug–Kellerer Radiobiological Models

    PubMed Central

    Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.

    2017-01-01

    Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacing. The therapeutic parameters of these blocks, namely, the therapeutic ratio (TR) and geometrical sparing factor (GSF) were calculated using two different radiobiological models, including the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) is also used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF~0.90). The measured dosimetric characteristics of the optimum Grid blocks including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. PMID:29296035
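
    A hedged sketch of the linear-quadratic survival comparison underlying the therapeutic parameters above; the alpha/beta values and peak/valley doses are generic illustrative numbers, not the paper's fitted parameters:

        import numpy as np

        def survival(dose_gy, alpha=0.3, beta=0.03):
            """Linear-quadratic cell survival, S = exp(-alpha*D - beta*D^2)."""
            return np.exp(-alpha * dose_gy - beta * dose_gy**2)

        peak, valley = 15.0, 3.0          # Gy; illustrative Grid peak and valley doses
        print(survival(peak))             # low survival under the open holes
        print(survival(valley))           # high survival in shielded (valley) regions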

  10. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  11. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. In vitro Dosimetric Study of Biliary Stent Loaded with Radioactive 125I Seeds

    PubMed Central

    Yao, Li-Hong; Wang, Jun-Jie; Shang, Charles; Jiang, Ping; Lin, Lei; Sun, Hai-Tao; Liu, Lu; Liu, Hao; He, Di; Yang, Rui-Jie

    2017-01-01

    Background: A novel radioactive 125I seed-loaded biliary stent has been used for patients with malignant biliary obstruction. However, the dosimetric characteristics of the stents remain unclear. Therefore, we aimed to describe the dosimetry of stents of different lengths, with different numbers and activities of 125I seeds. Methods: The radiation dosimetry of three representative radioactive stent models was evaluated using a treatment planning system (TPS), thermoluminescent dosimeter (TLD) measurements, and Monte Carlo (MC) simulations. In the process of TPS calculation and TLD measurement, two different water-equivalent phantoms were designed to obtain the cumulative radial dose distribution. Calibration procedures using TLDs in the designed phantom were also conducted. MC simulations were performed using the Monte Carlo N-Particle eXtended version 2.5 general purpose code to calculate the radioactive stent's three-dimensional dose rate distribution in liquid water. Analysis of covariance was used to examine the factors influencing the radial dose distribution of the radioactive stent. Results: The maximum reduction in cumulative radial dose was 26% when the seed activity changed from 0.5 mCi to 0.4 mCi for the same length of radioactive stent. The TLDs' dose response in the range of 0-10 mGy irradiation by 137Cs γ-ray was linear: y = 182225x − 6651.9 (R2 = 0.99152; y is the irradiation dose in mGy, x is the TLDs' reading in nC). When TLDs were irradiated to a dose of 1 mGy by radiation sources of different energies, the readings of the TLDs differed. Doses at a distance of 0.1 cm from the three stents' surfaces simulated by MC were 79, 93, and 97 Gy. Conclusions: TPS calculation, TLD measurement, and MC simulation were performed and were found to be in good agreement. Although the whole experiment was conducted in a water-equivalent phantom, the data in our evaluation may provide a theoretical basis for dosimetry in clinical application. PMID:28469106
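
    Applying the calibration line quoted above directly (y = 182225x − 6651.9, with y the dose in mGy and x the TLD reading in nC); the example reading is made up:

        def tld_dose_mgy(reading_nc):
            """Dose (mGy) from a TLD reading (nC) via the quoted calibration line."""
            return 182225.0 * reading_nc - 6651.9

        print(tld_dose_mgy(0.03655))   # ~8.4 mGy, inside the 0-10 mGy calibration range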

  13. SU-E-T-795: Validations of Dose Calculation Accuracy of Acuros BV in High-Dose-Rate (HDR) Brachytherapy with a Shielded Cylinder Applicator Using Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Department of Engineering Physics, Tsinghua University, Beijing; Tian, Z

    Purpose: Acuros BV has become available to perform accurate dose calculations in high-dose-rate (HDR) brachytherapy with phantom heterogeneity considered by solving the Boltzmann transport equation. In this work, we performed validation studies regarding the dose calculation accuracy of Acuros BV in cases with a shielded cylinder applicator using Monte Carlo (MC) simulations. Methods: Fifteen cases were considered in our studies, covering five different diameters of the applicator and three different shielding degrees. For each case, a digital phantom was created in Varian BrachyVision with the cylinder applicator inserted in the middle of a large water phantom. A treatment plan with eight dwell positions was generated for these fifteen cases. Dose calculations were performed with Acuros BV. We then generated a voxelized phantom of the same geometry, and the materials were modeled according to the vendor's specifications. MC dose calculations were then performed using our in-house developed fast MC dose engine for HDR brachytherapy (gBMC) on a GPU platform, which is able to simulate both photon transport and electron transport in a voxelized geometry. A phase-space file for the Ir-192 HDR source was used as a source model for MC simulations. Results: Satisfactory agreement between the dose distributions calculated by Acuros BV and those calculated by gBMC was observed in all cases. Quantitatively, we computed the point-wise dose difference within the region that receives a dose higher than 10% of the reference dose, defined to be the dose at 5 mm outward from the applicator surface. The mean dose difference was ∼0.45%–0.51% and the 95th-percentile maximum difference was ∼1.24%–1.47%. Conclusion: Acuros BV is able to accurately perform dose calculations in HDR brachytherapy with a shielded cylinder applicator.
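
    A sketch of the comparison metric described above: point-wise dose differences restricted to the region above 10% of the reference dose, reporting mean and 95th-percentile values; the dose arrays here are synthetic stand-ins:

        import numpy as np

        rng = np.random.default_rng(1)
        d_ref = 10.0                                     # reference dose in Gy (placeholder)
        d_acuros = rng.uniform(0.0, 10.0, size=100_000)  # synthetic Acuros BV doses
        d_mc = d_acuros * (1.0 + rng.normal(0.0, 0.005, d_acuros.shape))  # synthetic MC

        mask = d_acuros > 0.10 * d_ref                   # keep the >10% reference region
        rel_diff = np.abs(d_acuros[mask] - d_mc[mask]) / d_ref * 100.0
        print(rel_diff.mean(), np.percentile(rel_diff, 95))  # mean and 95th percentile, %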

  14. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Shen, C; Tian, Z

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in the Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies of 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
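
    A minimal sketch of the commissioning idea under stated assumptions: a measured depth dose is modelled as a Gaussian-weighted superposition of precomputed ideal pencil-beam doses, and the mean energy and energy spread are fitted. The ideal-beam "library", toy range rule, and all numbers are synthetic:

        import numpy as np
        from scipy.optimize import curve_fit

        energies = np.linspace(140.0, 154.0, 29)          # MeV grid of ideal beams
        depths = np.linspace(0.0, 160.0, 161)             # mm

        def ideal_depth_dose(energy):
            r80 = 1.0 * energy                            # toy range rule (mm), synthetic
            return np.exp(-0.5 * ((depths - r80) / 4.0) ** 2)  # toy Bragg-peak shape

        library = np.array([ideal_depth_dose(e) for e in energies])

        def model(_x, mean_e, sigma_e):
            w = np.exp(-0.5 * ((energies - mean_e) / sigma_e) ** 2)
            return (w / w.sum()) @ library                # Gaussian-weighted superposition

        measured = model(depths, 147.0, 1.2)              # synthetic "measurement"
        popt, _ = curve_fit(model, depths, measured, p0=(145.0, 2.0))
        print(popt)                                       # recovers ~[147.0, 1.2]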

  15. A Comparison of Experimental EPMA Data and Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2004-01-01

    Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and phi(rho z) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been used to develop phi(rho z) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.

  16. Measuring Virtual Simulations Value in Training Exercises - USMC Use Case

    DTIC Science & Technology

    2015-12-04


  17. Multiscale modelling of precipitation in concentrated alloys: from atomistic Monte Carlo simulations to cluster dynamics I thermodynamics

    NASA Astrophysics Data System (ADS)

    Lépinoux, J.; Sigli, C.

    2018-01-01

    In a recent paper, the authors showed how the cluster free energies are constrained by the coagulation probability, and explained various anomalies observed during precipitation kinetics in concentrated alloys. This coagulation probability appeared to be too complex a function to be accurately predicted from the cluster distribution alone in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates and then to switch to CD, which is mean-field but, unlike MC, not limited by a box size. In this paper, we explain how to take into account the information available before the quasi-equilibrium state in order to establish guidelines for safely predicting the cluster free energies.

  18. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y.

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 µm-wide microbeams spaced by 200-400 µm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two different media. Optimization of the simulation parameters and the use of VR techniques saved a significant amount of computation time. Finally, parallelization of the simulations improved the calculation time even further, reaching 1 day for a typical irradiation case envisaged in the forthcoming clinical trials in MRT. An example of MRT treatment in a dog's head is presented, showing the performance of the calculation engine. Conclusions: The development of the first MC-based calculation engine for the future TPS devoted to MRT has been accomplished. This will constitute an essential tool for the future clinical trials on pets at the ESRF. The MC engine is able to calculate dose distributions in micrometer-sized bins in complex voxelized CT structures in a reasonable amount of time. Minimization of the computation time by using several approaches has led to timings that are adequate for pet radiotherapy at synchrotron facilities. The next step will consist in its integration into a user-friendly graphical front-end.
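
    A small sketch of the grid-decoupling step described above, assuming the fine dose bins nest inside the CT voxels so material lookups reduce to integer index division; bin and voxel sizes are illustrative:

        import numpy as np

        ct_voxel_mm = 1.0                    # illustrative CT voxel size
        dose_bin_um = 25.0                   # illustrative transverse dose-bin size
        bins_per_voxel = int(ct_voxel_mm * 1000.0 / dose_bin_um)   # 40 bins per voxel

        dose_bin_index = np.arange(400)                  # fine transverse bin indices
        ct_index = dose_bin_index // bins_per_voxel      # coarse voxel for material lookup
        print(ct_index[:45])                             # forty 0s, then 1s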

  19. Improvements in pencil beam scanning proton therapy dose calculation accuracy in brain tumor cases with a commercial Monte Carlo algorithm.

    PubMed

    Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco

    2018-05-04

    Validation of a commercial Monte Carlo (MC) algorithm (RayStation ver. 6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements, a 2D ion chamber array detector (MatriXX PT) was placed underneath the following targets: (1) an anthropomorphic head phantom (with two different thicknesses) and (2) a biological sample (i.e. half a lamb's head). In addition, we compared the MC dose engine against the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e. in need of a range shifter), different air gaps and gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan, the PB and MC dose calculations were compared to measurements using a gamma analysis metric (3%, 3 mm). Results: Regarding the head phantom, the gamma passing rate (GPR) was always >96% and on average >99% for the MC algorithm; the PB algorithm had a GPR ≤90% for all delivery configurations with a single slab (apart from a 95% GPR at gantry 0° with a small air gap), and for two slabs of the head phantom the GPR was >95% only for small air gaps at all three simulated beam gantry angles (0°, 45°, and 70°). Overall, the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organs at risk (up to 30%). We found similar results (though slightly worse for the PB algorithm) for the two targets of the lamb's head, where only two beam gantry angles were simulated. Conclusions: Our results suggest that in PBS proton therapy a range shifter (RS) needs to be used with extreme caution when planning the treatment with an analytical algorithm, due to potentially large discrepancies between the planned dose and the dose delivered to the patient, also in the case of brain tumours, where this issue could be underestimated. Our results also suggest that an MC evaluation of the dose should be performed every time the RS is used and, especially, when it is used with large air gaps and beam directions tangential to the patient surface. © 2018 Institute of Physics and Engineering in Medicine.

  20. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    NASA Astrophysics Data System (ADS)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

    We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to that of the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous-time microscopic MC and CGMC simulations are compared under far-from-equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by a factor of q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous-time MC simulations that vary from q^3 for short-range potentials to q^4 for long-range potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
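
    To make the coarse-graining level q concrete, here is a small sketch (a toy, not the authors' algorithm) that lumps a fine 2-D lattice occupancy field into q x q block occupation numbers, the kind of coarse variable a CGMC description evolves.

        import numpy as np

        def coarse_grain(occupancy, q):
            # Sum q x q blocks of a 2-D 0/1 occupancy field; the block
            # particle count is the coarse-grained state variable.
            L = occupancy.shape[0]
            assert L % q == 0, "lattice must divide evenly into q x q blocks"
            return occupancy.reshape(L // q, q, L // q, q).sum(axis=(1, 3))

        rng = np.random.default_rng(0)
        lattice = (rng.random((64, 64)) < 0.3).astype(int)
        eta = coarse_grain(lattice, q=8)   # 8 x 8 grid of block occupations
        print(eta.shape, eta.max())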

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Shoudi; He, Jiansen; Yang, Liping

    The impact of an overtaking fast shock on a magnetic cloud (MC) is a pivotal process in CME-CME (CME: coronal mass ejection) interactions and CME-SIR (SIR: stream interaction region) interactions. An MC with a strong and rotating magnetic field is usually deemed a crucial part of CMEs. To study the impact of a fast shock on an MC, we perform a 2.5-dimensional numerical magnetohydrodynamic simulation. Two cases are run in this study: without and with the impact of the fast shock. In the former case, the MC expands gradually from its initial state and drives a relatively slow magnetic reconnection with the ambient magnetic field. Analyses of the forces near the core of the MC as a whole body indicate that the solar gravity is quite small compared to the Lorentz force and the pressure gradient force. In the second run, a fast shock propagates, relative to the background plasma, at a speed twice the perpendicular fast magnetosonic speed, catches up with, and overtakes the MC. Due to the penetration of the fast shock, the MC is highly compressed and heated, with the temperature growth rate enhanced by a factor of about 10 and the velocity increased to about half of the shock speed. The magnetic reconnection with the ambient magnetic field is also sped up, by a factor of two to four in reconnection rate, as a result of the enhanced density of the current sheet, which is squeezed by the forward motion of the shocked MC.

  2. Quark masses and strong coupling constant in 2+1 flavor QCD

    DOE PAGES

    Maezawa, Y.; Petreczky, P.

    2016-08-30

    We present a determination of the strange, charm and bottom quark masses as well as the strong coupling constant in 2+1 flavor lattice QCD simulations using the highly improved staggered quark action. The ratios of the charm quark mass to the strange quark mass and of the bottom quark mass to the charm quark mass are obtained from the meson masses calculated on the lattice and found to be mc/ms = 11.877(91) and mb/mc = 4.528(57) in the continuum limit. We also determine the strong coupling constant and the charm quark mass using the moments of pseudoscalar charmonium correlators: αs(μ = mc) = 0.3697(85) and mc(μ = mc) = 1.267(12) GeV. Our result for αs corresponds to the determination of the strong coupling constant at the lowest energy scale so far and translates to the value αs(μ = MZ, nf = 5) = 0.11622(84).

  3. SUPERNOVA DRIVING. II. COMPRESSIVE RATIO IN MOLECULAR-CLOUD TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Liubin; Padoan, Paolo; Haugbølle, Troels

    2016-07-01

    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
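
    For context on the quantity being measured, the compressive ratio of a periodic velocity field can be estimated by projecting the Fourier-space velocity onto the wavevector, which isolates the compressive (curl-free) power. A minimal numpy sketch, assuming a cubic periodic box and a velocity field with negligible mean flow, follows.

        import numpy as np

        def compressive_ratio(vx, vy, vz):
            # Ratio of compressive to solenoidal power via the longitudinal
            # projection |k.V|^2 / |k|^2 of the Fourier-space velocity.
            n = vx.shape[0]
            k = np.fft.fftfreq(n)
            KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
            k2 = KX**2 + KY**2 + KZ**2
            k2[0, 0, 0] = 1.0          # avoid 0/0; the k=0 mode has k.V = 0
            Vx, Vy, Vz = (np.fft.fftn(v) for v in (vx, vy, vz))
            dot = KX * Vx + KY * Vy + KZ * Vz
            comp = np.sum(np.abs(dot) ** 2 / k2)
            tot = np.sum(np.abs(Vx) ** 2 + np.abs(Vy) ** 2 + np.abs(Vz) ** 2)
            return comp / (tot - comp)

        rng = np.random.default_rng(0)
        v = [rng.standard_normal((32, 32, 32)) for _ in range(3)]
        # White noise puts 1/3 of power in the longitudinal mode -> ratio ~0.5.
        print(compressive_ratio(*v))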

  4. Application of classical simulations for the computation of vibrational properties of free molecules.

    PubMed

    Tikhonov, Denis S; Sharapa, Dmitry I; Schwabedissen, Jan; Rybkin, Vladimir V

    2016-10-12

    In this study, we investigate the ability of classical molecular dynamics (MD) and Monte Carlo (MC) simulations to model intramolecular vibrational motion. These simulations were used to compute thermally averaged geometrical structures and infrared vibrational intensities for a benchmark set previously studied by gas electron diffraction (GED): CS2, benzene, chloromethylthiocyanate, pyrazinamide and 9,12-I2-1,2-closo-C2B10H10. The MD sampling of NVT ensembles was performed using chains of Nosé-Hoover thermostats (NH) as well as the generalized Langevin equation (GLE) thermostat. The performance of the theoretical models based on the classical MD and MC simulations was compared with the experimental data and also with alternative computational techniques: a conventional approach based on the Taylor expansion of the potential energy surface, path-integral MD, and MD with a quantum thermal bath (QTB) based on the generalized Langevin equation. A straightforward application of the classical simulations resulted, as expected, in poor accuracy of the calculated observables due to the complete neglect of quantum effects. However, the introduction of a posteriori quantum corrections significantly improved the situation. The application of these corrections for MD simulations of systems with large-amplitude motions was demonstrated for chloromethylthiocyanate. The comparison of the theoretical vibrational spectra revealed that the GLE thermostat used in this work is not applicable for this purpose. On the other hand, the NH chains yielded reasonably good results.

  5. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    DOE PAGES

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software package for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray-tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.

  6. The effect of moisture on the shear bond strength of gold alloy rods bonded to enamel with a self-adhesive and a hydrophobic resin cement.

    PubMed

    Dursun, Elisabeth; Wiechmann, Dirk; Attal, Jean-Pierre

    2010-06-01

    The aim of this in vitro study was to investigate the influence of enamel moisture on the shear bond strength (SBS) of a hydrophobic resin cement, Maximum Cure (MC), and a self-adhesive resin cement, Multilink Sprint (MLS), after etching of the enamel. Forty cylindrical gold alloy rods were used to simulate the Incognito lingual bracket system. They were bonded to the enamel of 40 human teeth embedded in self-cured acrylic resin. Twenty were bonded with MC (10 on dry and 10 on wet enamel) and 20 with MLS (10 on dry and 10 on wet enamel). The SBS of MC and MLS was determined in a universal testing machine and the site of bond failure was defined by the adhesive remnant index (ARI). A Kruskal-Wallis test was performed followed by Games-Howell post hoc pairwise comparison tests on the SBS results (P < 0.05) and a chi-square test was used for the analysis of ARI scores (P < 0.05). On dry enamel, no significant differences between MC (58 +/- 5 MPa) and MLS (64 +/- 13 MPa) were noted. On wet enamel, the adherence of MC (6 +/- 8 MPa) and MLS (37 +/- 13 MPa) significantly decreased but to a lesser extent for MLS. The ARI scores corroborated these results. In conclusion, MC did not tolerate moisture. MLS was also affected but maintained sufficient adherence.
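
    The group comparison described above can be outlined with scipy's Kruskal-Wallis test; the samples below are synthetic stand-ins drawn from the reported group means and standard deviations, not the study's data (the Games-Howell post hoc test is not included here).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Hypothetical SBS samples (MPa) mimicking the reported means/SDs.
        mc_dry  = rng.normal(58, 5, 10)
        mls_dry = rng.normal(64, 13, 10)
        mc_wet  = rng.normal(6, 8, 10)
        mls_wet = rng.normal(37, 13, 10)

        h, p = stats.kruskal(mc_dry, mls_dry, mc_wet, mls_wet)
        print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")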

  7. Validation of an In-Water, Tower-Shading Correction Scheme

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Doyle, John P.; Zibordi, Giuseppe; vanderLinde, Dirk

    2003-01-01

    Large offshore structures used for the deployment of optical instruments can significantly perturb the intensity of the light field surrounding the optical measurement point, where different portions of the visible spectrum are subject to different shadowing effects. These effects degrade the quality of the acquired optical data and can reduce the accuracy of several derived quantities, such as those obtained by applying bio-optical algorithms directly to the shadow-perturbed data. As a result, optical remote sensing calibration and validation studies can be impaired if shadowing artifacts are not fully accounted for. In this work, the general in-water shadowing problem is examined for a particular case study. Backward Monte Carlo (MC) radiative transfer computations, performed in a vertically stratified, horizontally inhomogeneous, and realistic ocean-atmosphere system, are shown to accurately simulate the shadow-induced relative percent errors affecting the radiance and irradiance data profiles acquired close to an oceanographic tower. Multiparameter optical data processing has provided an adequate representation of experimental uncertainties, allowing consistent comparison with simulations. The more detailed simulations at the subsurface depth appear to be essentially equivalent to those obtained assuming a simplified ocean-atmosphere system, except in highly stratified waters. MC computations performed in the simplified system can therefore be assumed to accurately simulate the optical measurements conducted under more complex sampling conditions (i.e., within waters presenting moderate stratification at most). A previously reported correction scheme, based on the simplified MC simulations and developed for subsurface shadow-removal processing of in-water optical data taken close to the investigated oceanographic tower, is then adequately validated under most experimental conditions. It appears feasible to generalize the present tower-specific approach to solve other optical sensor shadowing problems pertaining to differently shaped deployment platforms, including surrounding structures and instrument casings.

  8. Destruction of a Magnetized Star

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-01-01

    What happens when a magnetized star is torn apart by the tidal forces of a supermassive black hole, in a violent process known as a tidal disruption event? Two scientists have broken new ground by simulating the disruption of stars with magnetic fields for the first time. [Figure: the magnetic field configuration during a simulation of the partial disruption of a star. Top left: pre-disruption star. Bottom left: matter begins to re-accrete onto the surviving core after the partial disruption. Right: vortices form in the core as high-angular-momentum debris continues to accrete, winding up and amplifying the field. Adapted from Guillochon & McCourt 2017.] What about magnetic fields? Magnetic fields are expected to exist in the majority of stars. Though these fields don't dominate the energy budget of a star (the magnetic pressure is a million times weaker than the gas pressure in the Sun's interior, for example), they are the drivers of interesting activity, like the prominences and flares of our Sun. Given this, we can wonder what role stars' magnetic fields might play when the stars are torn apart in tidal disruption events. Do the fields change what we observe? Are they dispersed during the disruption, or can they be amplified? Might they even be responsible for launching jets of matter from the black hole after the disruption? Star vs. black hole: In a recent study, James Guillochon (Harvard-Smithsonian Center for Astrophysics) and Michael McCourt (Hubble Fellow at UC Santa Barbara) have tackled these questions by performing the first simulations of tidal disruptions of stars that include magnetic fields. In their simulations, Guillochon and McCourt evolve a solar-mass star that passes close to a million-solar-mass black hole. Their simulations explore different magnetic field configurations for the star, and they consider both what happens when the star barely grazes the black hole and is only partially disrupted, and what happens when the black hole tears the star apart completely. Amplifying encounters: For stars that survive their encounter with the black hole, Guillochon and McCourt find that the process of partial disruption and re-accretion can amplify the magnetic field of the star by up to a factor of 20. Repeated encounters of the star with the black hole could amplify the field even more. The authors suggest an interesting implication of this idea: a population of highly magnetized stars may have formed in our own galactic center, resulting from their encounters with the supermassive black hole Sgr A*. [Figure: a turbulent magnetic field forms after a partial stellar disruption and re-accretion of the tidal tails. Adapted from Guillochon & McCourt 2017.] Effects in destruction: For stars that are completely shredded and form a tidal stream after their encounter with the black hole, the authors find that the magnetic field geometry straightens within the stream of debris. There, the pressure of the magnetic field eventually dominates over the gas pressure and self-gravity. Guillochon and McCourt find that the field's new configuration isn't ideal for powering jets from the black hole, but it is strong enough to influence how the stream interacts with itself and its surrounding environment, likely affecting what we can expect to see from these short-lived events. These simulations have clearly demonstrated the need to further explore the role of magnetic fields in the disruptions of stars by black holes. Citation: James Guillochon and Michael McCourt 2017 ApJL 834 L19. doi:10.3847/2041-8213/834/2/L19

  9. Carrier trajectory tracking equations for Simple-band Monte Carlo simulation of avalanche multiplication processes

    NASA Astrophysics Data System (ADS)

    Ong, J. S. L.; Charin, C.; Leong, J. H.

    2017-12-01

    Avalanche photodiodes (APDs) with steep electric field gradients generally have low excess noise arising from the carrier multiplication that produces the internal gain of the devices, and the Monte Carlo (MC) method is among the popular simulation tools for such devices. However, there are few articles relating to carrier trajectory modeling in MC models for such devices. In this work, a set of electric-field-gradient-dependent carrier trajectory tracking equations are developed and used to update the positions of carriers along the path during Simple-band Monte Carlo (SMC) simulations of APDs with non-uniform electric fields. The mean gain and excess noise results obtained from the SMC model employing these equations show good agreement with the results reported for a series of silicon diodes, including a p+n diode with steep electric field gradients. These results confirm the validity and demonstrate the feasibility of the trajectory tracking equations applied in SMC models for simulating mean gain and excess noise in APDs with non-uniform electric fields. Also, the simulation results of mean gain, excess noise, and carrier ionization positions obtained from the SMC model of this work agree well with those of the conventional SMC model employing the concept of a uniform electric field within a carrier free-flight. These results demonstrate that the electric field variation within a carrier free-flight has an insignificant effect on the predicted mean gain and excess noise results. Therefore, both the SMC model of this work and the conventional SMC model can be used to predict the mean gain and excess noise in APDs with highly non-uniform electric fields.
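
    The position-update idea can be illustrated with a small numerical free-flight integrator in a linearly varying field E(x) = E0 + g x. This is a generic sketch assuming a constant effective mass and purely 1-D motion, not the authors' trajectory tracking equations.

        def free_flight(x0, v0, t_flight, e0, grad, q_over_m, n_steps=100):
            # Advance a carrier through one free flight in a linearly varying
            # field using small velocity-Verlet steps (toy, 1-D, classical).
            dt = t_flight / n_steps
            x, v = x0, v0
            a = q_over_m * (e0 + grad * x)
            for _ in range(n_steps):
                x += v * dt + 0.5 * a * dt * dt
                a_new = q_over_m * (e0 + grad * x)
                v += 0.5 * (a + a_new) * dt
                a = a_new
            return x, v

        # Electron-like carrier (q/m = -1.76e11 C/kg) in a 1e7 V/m field
        # with a steep assumed gradient of 1e12 V/m^2.
        x, v = free_flight(x0=0.0, v0=1e5, t_flight=1e-13,
                           e0=1e7, grad=1e12, q_over_m=-1.76e11)
        print(x, v)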

  10. Molecular simulation study of cavity-generated instabilities in the superheated Lennard-Jones liquid

    NASA Astrophysics Data System (ADS)

    Torabi, Korosh; Corti, David S.

    2010-10-01

    Previous equilibrium-based density-functional theory (DFT) analyses of cavity formation in the pure component superheated Lennard-Jones (LJ) liquid [S. Punnathanam and D. S. Corti, J. Chem. Phys. 119, 10224 (2003); M. J. Uline and D. S. Corti, Phys. Rev. Lett. 99, 076102 (2007)] revealed that a thermodynamic limit of stability appears in which no liquidlike density profile can develop for cavity radii greater than some critical size (being a function of temperature and bulk density). The existence of these stability limits was also verified using isothermal-isobaric Monte Carlo (MC) simulations. To test the possible relevance of these limits of stability to a dynamically evolving system, one that may be important for homogeneous bubble nucleation, we perform isothermal-isobaric molecular dynamics (MD) simulations in which cavities of different sizes are placed within the superheated LJ liquid. When the impermeable boundary utilized to generate a cavity is removed, the MD simulations show that the cavity collapses and the overall density of the system remains liquidlike, i.e., the system is stable, when the initial cavity radius is below some certain value. On the other hand, when the initial radius is large enough, the cavity expands and the overall density of the system rapidly decreases toward vaporlike densities, i.e., the system is unstable. Unlike the DFT predictions, however, the transition between stability and instability is not infinitely sharp. The fraction of initial configurations that generate an instability (or a phase separation) increases from zero to unity as the initial cavity radius increases over a relatively narrow range of values, which spans the predicted stability limit obtained from equilibrium MC simulations. The simulation results presented here provide initial evidence that the equilibrium-based stability limits predicted in the previous DFT and MC simulation studies may play some role, yet to be fully determined, in the homogeneous nucleation and growth of embryos within metastable fluids.

  11. Characterization of Compton-scatter imaging with an analytical simulation method

    PubMed Central

    Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min−1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663

  12. Characterization of Compton-scatter imaging with an analytical simulation method

    NASA Astrophysics Data System (ADS)

    Jones, Kevin C.; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V.; Chu, James C. H.

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140-220 keV, and 40-50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min-1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images.

  13. SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y; Tian, Z; Jiang, S

    Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only for voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aiming at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package in an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of the total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVIDIA Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including the quadric geometry reduced efficiency due to the more complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged particle transport in this geometry are currently in progress.
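
    The body-location step described in the Methods can be sketched as follows: evaluate the quadric function of each limiting surface to test sidedness, and descend a tree of nested bodies depth-first. This is an illustrative CPU-side Python sketch, not the GPU implementation.

        import numpy as np

        def quadric(surface, p):
            # Q(p) = p.A.p + b.p + c; the sign of Q tells on which side of
            # the quadric surface the point p lies.
            A, b, c = surface
            return p @ A @ p + b @ p + c

        class Body:
            # A homogeneous body bounded by quadric surfaces; `children`
            # are bodies nested inside it, forming the geometry tree.
            def __init__(self, surfaces, children=()):
                self.surfaces = surfaces      # list of ((A, b, c), sign)
                self.children = list(children)

            def contains(self, p):
                return all(s * quadric(surf, p) >= 0
                           for surf, s in self.surfaces)

        def locate(body, p):
            # Depth-first search for the innermost body containing p.
            if not body.contains(p):
                return None
            for child in body.children:
                hit = locate(child, p)
                if hit is not None:
                    return hit
            return body

        # Two nested spheres: x^2 + y^2 + z^2 - r^2 <= 0 (required sign -1).
        sphere = lambda r: ((np.eye(3), np.zeros(3), -r * r), -1)
        inner = Body([sphere(1.0)])
        outer = Body([sphere(2.0)], children=[inner])
        p = np.array([0.0, 0.0, 1.5])
        print(locate(outer, p) is outer)   # True: in the shell between spheres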

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smit, C; Plessis, F du

    Purpose: To extract the electron contamination energy spectra for an Elekta Precise linac, based on pure-photon and measured clinical beam percentage depth dose data, and to include this as an additional source in isource 4 in DOSXYZnrc. Methods: A pure photon beam was simulated for the linac using isource 4 in the DOSXYZnrc Monte Carlo (MC) code. Percentage depth dose (PDD) data were extracted afterwards for a range of field sizes (FS). These simulated dose data were compared to actual measured PDD data, with the data normalized at 10 cm depth. The resulting PDD difference resembled the electron contamination depth dose. Since the dose fall-off is a strictly decreasing function, a method was adopted to derive the contamination electron spectrum. Afterwards this spectrum was used in a DOSXYZnrc MC simulation run to verify that the original electron depth dose could be replicated. Results: Various square aperture FSs for 6, 8 and 15 megavolt (MV) photon beams were modeled, simulated and compared to their respective measured PDD data. As FS increased, simulated pure-photon depth-dose profiles shifted deeper, thus requiring electron contamination to increase the surface dose. The percentage of electron weight increased with increasing FS. For a FS of 15×15 cm², the percentage electron weight is 0.1%, 0.2% and 0.4% for 6, 8 and 15 MV beams respectively. Conclusion: From the PDD results obtained, an additional electron contamination source was added to the photon source model so that simulated and measured PDD data could match within 2%/2 mm gamma-index criteria. The improved source model should assure more accurate simulation of surface doses. This research project was funded by the South African Medical Research Council (MRC) with funds from National Treasury under its Economic Competitiveness and Support package.
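
    The core subtraction step, normalizing the measured and pure-photon PDDs at 10 cm depth and taking their difference as the contaminant-electron depth dose, can be sketched as below. The function and the toy profiles are illustrative assumptions, not the authors' code.

        import numpy as np

        def electron_contamination_pdd(depth_cm, measured, pure_photon,
                                       norm_depth=10.0):
            # Normalize both PDDs to 100% at `norm_depth`, then subtract;
            # the near-surface residual approximates the electron dose.
            measured = np.asarray(measured, float)
            pure = np.asarray(pure_photon, float)
            i = np.argmin(np.abs(np.asarray(depth_cm) - norm_depth))
            measured = 100.0 * measured / measured[i]
            pure = 100.0 * pure / pure[i]
            return np.clip(measured - pure, 0.0, None)  # no negative dose

        depth = np.linspace(0, 30, 61)
        pure = 100 * np.exp(-0.035 * depth) * (1 - np.exp(-1.2 * depth))
        meas = pure + 8 * np.exp(-2.0 * depth)   # toy extra surface dose
        print(electron_contamination_pdd(depth, meas, pure)[:3])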

  15. Magnetic guidance versus manual control: comparison of radiofrequency lesion dimensions and evaluation of the effect of heart wall motion in a myocardial phantom.

    PubMed

    Bhaskaran, Abhishek; Barry, M A Tony; Al Raisi, Sara I; Chik, William; Nguyen, Doan Trang; Pouliopoulos, Jim; Nalliah, Chrishan; Hendricks, Roger; Thomas, Stuart; McEwan, Alistair L; Kovoor, Pramesh; Thiagalingam, Aravinda

    2015-10-01

    Magnetic navigation system (MNS) ablation was suspected to be less effective and less stable in highly mobile cardiac regions compared to radiofrequency (RF) ablation with manual control (MC). The aim of the study was to compare (1) the lesion size and (2) the stability of MNS versus MC during irrigated RF ablation with and without simulated mechanical heart wall motion. In a previously validated myocardial phantom, the performance of a Navistar RMT Thermocool catheter (Biosense Webster, CA, USA) guided with MNS was compared to a manually controlled Navistar irrigated Thermocool catheter (Biosense Webster, CA, USA). The lesion dimensions were compared with the catheter in inferior and superior orientation, with and without 6-mm simulated wall motion. All ablations were performed with 40 W power and 30 ml/min irrigation for 60 s. A total of 60 ablations were performed. The mean lesion volumes with MNS and MC were 57.5 ± 7.1 and 58.1 ± 7.1 mm(3), respectively, in the inferior catheter orientation (n = 23, p = 0.6), and 62.8 ± 9.9 and 64.6 ± 7.6 mm(3), respectively, in the superior catheter orientation (n = 16, p = 0.9). With 6-mm simulated wall motion, the mean lesion volumes with MNS and MC were 60.2 ± 2.7 and 42.8 ± 8.4 mm(3), respectively, in the inferior catheter orientation (n = 11, p < 0.01), and 74.1 ± 5.8 and 54.2 ± 3.7 mm(3), respectively, in the superior catheter orientation (n = 10, p < 0.01). During 6-mm simulated wall motion, the MC catheter and the MNS catheter moved 5.2 ± 0.1 and 0 mm, respectively, in the inferior orientation, and 5.5 ± 0.1 and 0 mm, respectively, in the superior orientation on the ablation surface. The lesion dimensions were larger with MNS compared to MC in the presence of simulated wall motion, consistent with greater catheter stability. However, similar lesion dimensions were observed in the stationary model.

  16. A fast GPU-based Monte Carlo simulation of proton transport with detailed modeling of nonelastic interactions.

    PubMed

    Wan Chan Tseung, H; Ma, J; Beltran, C

    2015-06-01

    Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, and (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D gamma pass rate at 2%/2 mm for treatment plan simulations is typically 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is ∼20 s for 1 × 10^7 proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam-based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.

  17. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD modules have been developed: (i) a continuum-based compact model, targeted towards a TCAD production environment, calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time structure of the beam for the amorphization process: assuming an average dose rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  18. The origin and reduction of spurious extrahepatic counts observed in 90Y non-TOF PET imaging post radioembolization

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud

    2018-04-01

    Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy-resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy-resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low-count-rate studies and from their poor robustness against emission-transmission inconsistency. A novel random correction method proposed here succeeds in removing the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were thus identified to be at the origin of the unusual 90Y true-coincidence radial profile. The removal of the spurious extrahepatic counts by TOF reconstruction was explained theoretically by its better robustness against emission-transmission inconsistency. Further studies are needed to assess the robustness of the novel random correction method.

  19. Entrance surface dose distribution and organ dose assessment for cone-beam computed tomography using measurements and Monte Carlo simulations with voxel phantoms

    NASA Astrophysics Data System (ADS)

    Baptista, M.; Di Maria, S.; Vieira, S.; Vaz, P.

    2017-11-01

    Cone-Beam Computed Tomography (CBCT) enables high-resolution volumetric scanning of the bone and soft tissue anatomy under investigation at the treatment accelerator. This technique is extensively used in Image Guided Radiation Therapy (IGRT) for pre-treatment verification of patient position and target volume localization. When employed daily and several times per patient, CBCT imaging may lead to high cumulative imaging doses to the healthy tissues surrounding the exposed organs. This work aims at (1) evaluating the dose distribution during a CBCT scan and (2) calculating the organ doses involved in this image-guiding procedure for clinically available scanning protocols. Both Monte Carlo (MC) simulations and measurements were performed. To model and simulate the kV imaging system mounted on a linear accelerator (Edge™, Varian Medical Systems), the state-of-the-art MC radiation transport program MCNPX 2.7.0 was used. In order to validate the simulation results, measurements of the Computed Tomography Dose Index (CTDI) were performed using standard PMMA head and body phantoms, 150 mm in length, and a standard pencil ionization chamber (IC) 100 mm long. Measurements for head and pelvis scanning protocols usually adopted in the clinical environment were acquired, using two acquisition modes (full-fan and half-fan). To calculate the organ doses, the implemented MC model of the CBCT scanner together with a male voxel phantom ("Golem") was used. The good agreement between the MCNPX simulations and the CTDIw measurements (differences up to 17%) presented in this work reveals that the CBCT MC model was successfully validated, taking into account the several sources of uncertainty. The adequacy of the computational model to map dose distributions during a CBCT scan is discussed in order to identify ways to reduce the total CBCT imaging dose. The organ dose assessment highlights the need to evaluate the therapeutic and the CBCT imaging doses in a more balanced approach, and the importance of improving awareness regarding the increased risk arising from repeated exposures.
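
    For reference, the weighted CTDI quoted above combines the standard 100 mm pencil-chamber readings at the phantom center and periphery as CTDIw = (1/3) CTDI100,center + (2/3) CTDI100,periphery. A minimal sketch with hypothetical readings:

        def ctdi_w(ctdi100_center, ctdi100_periphery):
            # Weighted CTDI from standard pencil-chamber measurements:
            # one third from the center, two thirds from the mean of the
            # peripheral positions.
            periph = sum(ctdi100_periphery) / len(ctdi100_periphery)
            return ctdi100_center / 3.0 + 2.0 * periph / 3.0

        # Hypothetical head-phantom readings (mGy).
        print(ctdi_w(20.0, [24.0, 23.5, 24.5, 23.0]))  # -> 22.5 mGy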

  20. MrBayes tgMC3++: A High Performance and Resource-Efficient GPU-Oriented Phylogenetic Analysis Method.

    PubMed

    Ling, Cheng; Hamada, Tsuyoshi; Gao, Jingyang; Zhao, Guoguang; Sun, Donghong; Shi, Weifeng

    2016-01-01

    MrBayes is a widespread phylogenetic inference tool harnessing empirical evolutionary models and Bayesian statistics. However, the computational cost of the likelihood estimation is very high, resulting in undesirably long execution times. Although a number of multi-threaded optimizations have been proposed to speed up MrBayes, there are bottlenecks that severely limit the GPU thread-level parallelism of likelihood estimations. This study proposes a high-performance and resource-efficient method for GPU-oriented parallelization of likelihood estimations. Instead of having to rely on empirical programming, the proposed novel decomposition storage model implements high-performance data transfers implicitly. In terms of performance improvement, a speedup factor of up to 178 can be achieved on the analysis of simulated datasets by four Tesla K40 cards. In comparison to the other publicly available GPU-oriented MrBayes variants, the tgMC3++ method (proposed herein) outperforms the tgMC3 (v1.0), nMC3 (v2.1.1) and oMC3 (v1.00) methods by speedup factors of up to 1.6, 1.9 and 2.9, respectively. Moreover, tgMC3++ supports more evolutionary models and gamma categories, which previous GPU-oriented methods failed to include in their analyses.

  1. A novel Monte Carlo algorithm for simulating crystals with McStas

    NASA Astrophysics Data System (ADS)

    Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.

    2004-07-01

    We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; therefore it can also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aiming at characterizing crystals, like diffraction topographs.
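
    A stripped-down version of the mosaic-crystal idea: sample the misorientation of the local mosaic block from a Gaussian mosaic distribution and accept a reflection when the tilted planes satisfy the Bragg condition. The fixed angular tolerance standing in for the Darwin width, and all numbers, are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def bragg_angle(wavelength, d_spacing, order=1):
            # Bragg condition: n * lambda = 2 d sin(theta).
            return np.arcsin(order * wavelength / (2.0 * d_spacing))

        def mosaic_block_reflects(theta_in, wavelength, d_spacing,
                                  mosaic_fwhm_rad, tolerance=1e-4):
            # Sample one mosaic block tilt from a Gaussian mosaic
            # distribution; accept if the tilted planes meet the Bragg
            # angle within a crude fixed tolerance.
            tilt = rng.normal(0.0, mosaic_fwhm_rad / 2.355)
            theta_b = bragg_angle(wavelength, d_spacing)
            return abs(theta_in + tilt - theta_b) < tolerance

        # 2.0 A neutrons on pyrolytic graphite (002), d = 3.355 A, 0.5 deg mosaic.
        theta_b = bragg_angle(2.0, 3.355)
        hits = sum(mosaic_block_reflects(theta_b, 2.0, 3.355, np.deg2rad(0.5))
                   for _ in range(100000))
        print(f"acceptance: {hits / 100000:.2%}")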

  2. Statistical thermodynamics of aligned rigid rods with attractive lateral interactions: Theory and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    dos Santos, G. J.; Linares, D. H.; Ramirez-Pastor, A. J.

    2018-04-01

    The phase behaviour of aligned rigid rods of length k (k-mers) adsorbed on two-dimensional square lattices has been studied by Monte Carlo (MC) simulations and the histogram reweighting technique. The k-mers, containing k identical units (each one occupying a lattice site), were deposited along one of the directions of the lattice. In addition, attractive lateral interactions were considered. The methodology was applied, particularly, to the study of the critical point of the condensation transition occurring in the system. The process was monitored by following the fourth-order Binder cumulant as a function of temperature for different lattice sizes. The results, obtained for k ranging from 2 to 7, show that: (i) the transition coverage exhibits a decreasing behaviour when plotted as a function of the k-mer size, and (ii) the transition temperature, Tc, exhibits a power-law dependence on k, Tc ∼ k^0.4, shifting to higher values as k increases. Comparisons with an analytical model based on a generalization of the Bragg-Williams approximation (BWA) were performed in order to support the simulation technique. A significant qualitative agreement was obtained between the BWA and MC results.
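
    The fourth-order Binder cumulant used to locate the critical point is U4 = 1 - <m^4>/(3<m^2>^2); curves of U4 versus temperature for different lattice sizes cross near Tc. A minimal sketch of the estimator, with its two limiting values as a check:

        import numpy as np

        def binder_cumulant(order_param_samples):
            # U4 = 1 - <m^4> / (3 <m^2>^2), computed from MC samples of the
            # order parameter m.
            m = np.asarray(order_param_samples, float)
            return 1.0 - np.mean(m ** 4) / (3.0 * np.mean(m ** 2) ** 2)

        rng = np.random.default_rng(3)
        # Gaussian-distributed m (disordered phase): U4 -> 0.
        print(binder_cumulant(rng.normal(0, 1, 100000)))
        # Sharply peaked |m| (ordered phase): U4 -> 2/3 exactly.
        print(binder_cumulant(np.full(1000, 0.8)))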

  3. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs) against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with MC simulations of the comprehensive PFX model, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists in directional biasing of the 60Co photon emission, while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions on OF calculation accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819 ± 0.004 and 0.8941 ± 0.0013 were calculated for the 4 mm and 8 mm collimators, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of the photon emission resulted in a time efficiency gain factor of up to 210 with respect to isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources' stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit a considerable dosimetric effect. In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.

  4. Radiation Measurements in Simulated Ablation Layers

    DTIC Science & Technology

    2010-12-06

    J. Spacecraft & Rockets, V35, No 6, 1998, pp 729-735. D'Souza MG, Eichmann TN, Mudford NR, Potter DF, Morgan RG, McIntyre TJ, Jacobs PA (2009...gases. D. Phil Thesis. Oxford University 1976. Potter, D., Eichmann, T., Brandis, A., Morgan, R., Jacobs, P., McIntyre, T., "Simulation of radiating...Heatshield Material. 46th AIAA Aerospace Sciences Meeting and Exhibit, AIAA 2008-1202, Reno, USA. D'Souza, M.G., Eichmann, T.N., Mudford, N.R., Potter

  5. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction so that the different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries from the MC simulation, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  6. A new method for shape and texture classification of orthopedic wear nanoparticles.

    PubMed

    Zhang, Dongning; Page, Janet R; Kavanaugh, Aaron E; Billi, Fabrizio

    2012-09-27

    Detailed morphologic analysis of particles produced during wear of orthopedic implants is important in determining a correlation among material, wear, and biological effects. However, the use of simple shape descriptors is insufficient to categorize the data and to compare the nature of wear particles generated by different implants. An approach based on Discrete Fourier Transform (DFT) is presented for describing particle shape and surface texture. Four metal-on-metal bearing couples were tested in an orbital wear simulator under standard and adverse (steep-angled cups) wear simulator conditions. Digitized Scanning Electron Microscope (SEM) images of the wear particles were imported into MATLAB to carry out Fourier descriptor calculations via a specifically developed algorithm. The descriptors were then used for studying particle characteristics (shape and texture) as well as for cluster classification. Analysis of the particles demonstrated the validity of the proposed model by showing that steep-angle Co-Cr wear particles were more asymmetric, compressed, extended, triangular, square, and roughened at 3 Mc than after 0.25 Mc. In contrast, particles from standard angle samples were only more compressed and extended after 3 Mc compared to 0.25 Mc. Cluster analysis revealed that the 0.25 Mc steep-angle particle distribution was a subset of the 3 Mc distribution.
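
    A generic sketch of DFT-based shape descriptors of the kind discussed above: form the complex boundary signature of a closed contour, take its FFT, and normalize the harmonic magnitudes. This is a textbook construction, not the paper's exact algorithm or its MATLAB code.

        import numpy as np

        def fourier_descriptors(x, y, n_harmonics=8):
            # Complex boundary signature: subtracting the centroid removes
            # translation, dividing by the first harmonic removes scale, and
            # keeping magnitudes only removes rotation/starting-point effects.
            z = np.asarray(x, float) + 1j * np.asarray(y, float)
            c = np.fft.fft(z - z.mean())
            mags = np.abs(c)
            idx = np.r_[1:n_harmonics + 1, -n_harmonics:0]  # +/- harmonics
            return mags[idx] / mags[1]

        # An ellipse has energy only in the +1 and -1 harmonics:
        t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        fd = fourier_descriptors(2.0 * np.cos(t), np.sin(t))
        print(np.round(fd, 3))   # 1.0 at +1, ~0.333 at -1, ~0 elsewhere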

  7. Validation of the Oncentra Brachy Advanced Collapsed cone Engine for a commercial (192)Ir source using heterogeneous geometries.

    PubMed

    Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc

    2015-01-01

    To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of Oncentra Brachy (OcB) treatment planning system using an (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against TG-43 methodology. Level 2 uses real-patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed overlapping between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated by MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75% but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  8. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    PubMed

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-09-10

    A method of recharging implantable biosensors based on solar radiation is proposed. First, the models of the proposed method are developed. Second, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieving a low-cost, convenient and safe way of recharging implantable biosensors.
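
    A heavily simplified sketch of the kind of layered-tissue photon MC the abstract describes: a 1-D random walk with exponential free paths and partial weight absorption per interaction. The layer thicknesses and optical coefficients are hypothetical, and boundary-crossing corrections, anisotropic phase functions, refractive mismatch, and Russian roulette are all omitted.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical layers: (name, thickness mm, mu_a 1/mm, mu_s 1/mm).
        LAYERS = [("epidermis", 0.1, 0.30, 10.0),
                  ("dermis",    1.9, 0.10,  8.0),
                  ("subcutis",  3.0, 0.05,  5.0)]

        def layer_at(z):
            top = 0.0
            for i, (_, thick, _, _) in enumerate(LAYERS):
                if z < top + thick:
                    return i
                top += thick
            return None  # transmitted through the bottom of the stack

        def run_photons(n=5000):
            absorbed = np.zeros(len(LAYERS))
            for _ in range(n):
                z, mu_z, w = 0.0, 1.0, 1.0   # depth, direction cosine, weight
                while w > 1e-4:
                    i = layer_at(z)
                    mu_a, mu_s = LAYERS[i][2], LAYERS[i][3]
                    z += mu_z * -np.log(rng.random()) / (mu_a + mu_s)
                    if z < 0:                # escaped back out of the surface
                        break
                    j = layer_at(z)
                    if j is None:            # transmitted
                        break
                    mu_a, mu_s = LAYERS[j][2], LAYERS[j][3]
                    absorbed[j] += w * mu_a / (mu_a + mu_s)  # partial absorption
                    w *= mu_s / (mu_a + mu_s)
                    mu_z = rng.choice([-1.0, 1.0])  # crude 1-D redirection
            return absorbed / n

            # Fraction of launched energy absorbed per layer.
        frac = run_photons()
        print({name: round(f, 3) for (name, *_), f in zip(LAYERS, frac)})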

  9. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor

    PubMed Central

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. First, the models of the proposed method are developed. Second, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieving a low-cost, convenient and safe way of recharging implantable biosensors. PMID:27626422

  10. Multicanonical molecular dynamics simulations combined with Metadynamics for the free energy landscape of a biomolecular system with high energy barriers

    NASA Astrophysics Data System (ADS)

    Yonezawa, Yasushige; Shimoyama, Hiromitsu; Nakamura, Haruki

    2011-01-01

    Multicanonical molecular-dynamics (McMD) simulation and Metadynamics (MetaD) are useful for obtaining free energies, and the two can be mutually complementary. We combined McMD with MetaD and applied the combination to conformational free energy calculations of a proline dipeptide. First, MetaD was performed along the dihedral angle at the prolyl bond, yielding a coarse biasing potential. After adding the biasing potential to the dihedral angle potential energy, we conducted McMD with the modified potential energy. Enhanced sampling was achieved for all degrees of freedom, and the sampling of the dihedral angle space was facilitated. After reweighting, we obtained an accurate free energy landscape.
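
    The first stage, accumulating a coarse biasing potential along the prolyl dihedral, can be sketched as a 1-D metadynamics loop that deposits periodic Gaussians at visited angles. Heights, widths, and the toy trajectory below are assumptions for illustration.

        import numpy as np

        GRID = np.linspace(-180.0, 180.0, 361)   # prolyl dihedral grid (deg)

        def periodic_diff(a, b):
            # Smallest signed angular difference in degrees.
            return (a - b + 180.0) % 360.0 - 180.0

        def deposit_gaussian(bias, center, height=0.2, width=15.0):
            # Add one metadynamics Gaussian (e.g. kcal/mol) centered on the
            # currently visited dihedral value; modifies `bias` in place.
            bias += height * np.exp(
                -periodic_diff(GRID, center) ** 2 / (2.0 * width ** 2))

        bias = np.zeros_like(GRID)
        # Toy trajectory hopping between cis (~0 deg) and trans (~180 deg).
        for center in [0.0, 5.0, -3.0, 178.0, -179.0, 2.0, 176.0]:
            deposit_gaussian(bias, center)

        # The negative of the accumulated bias estimates the free-energy
        # profile; in the combined scheme it would be added to the dihedral
        # potential before the multicanonical (McMD) stage.
        print(round(float(GRID[np.argmax(bias)]), 1), round(float(bias.max()), 3))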

  11. Newly developed photon-cell interactive Monte Carlo (pciMC) simulation for non-invasive and continuous diagnosis of blood during extracorporeal circulation support

    NASA Astrophysics Data System (ADS)

    Sakota, Daisuke; Takatani, Setsuo

    2011-07-01

    We have sought non-invasive diagnosis of blood during extracorporeal circulation support. To achieve this goal, we have newly developed a photon-cell interactive Monte Carlo (pciMC) model for optical propagation through blood. The pciMC explicitly describes the interaction of photons with 3-dimensional biconcave RBCs. The scattering is described by a microscopic RBC boundary condition based on geometric optics. Using pciMC, we modeled how the RBCs inside the extracorporeal circuit are oriented by the blood flow. The RBCs' orientation was defined by their long axis being directed toward the center of the circulation tube; simultaneously, the RBCs were allowed to rotate randomly about the long-axis direction. As a result, as the flow rate increased, the orientation rate increased and converged to approximately 22% at a flow rate of 0.5 L/min and above. Finally, using this model, pciMC non-invasively and absolutely predicted Hct and hemoglobin with accuracies of 0.84 +/- 0.82 [Hct%] and 0.42 +/- 0.28 [g/dL], respectively, against measurements by a blood gas analyzer.

  12. An energy function for dynamics simulations of polypeptides in torsion angle space

    NASA Astrophysics Data System (ADS)

    Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.

    1998-05-01

    Conventional simulation techniques for modeling the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high-frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential, derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggerating hydrogen-bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. The same temperature dependence of the stability and build-up of α-helices of 18-alanine as found in MD simulations is also observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of the MD trajectories are also faithfully reproduced. Finally, it is demonstrated that even for high-temperature unfolded polypeptides the MC simulation is more efficient than conventional MD simulation by a factor of 10.

  13. The isotropic-nematic phase transition of tangent hard-sphere chain fluids—Pure components

    NASA Astrophysics Data System (ADS)

    van Westen, Thijs; Oyarzún, Bernardo; Vlugt, Thijs J. H.; Gross, Joachim

    2013-07-01

    An extension of Onsager's second virial theory is developed to describe the isotropic-nematic phase transition of tangent hard-sphere chain fluids. Flexibility is introduced by the rod-coil model. The effect of chain flexibility on the second virial coefficient is described using an accurate, analytical approximation for the orientation-dependent pair-excluded volume. The use of this approximation allows for an analytical treatment of intramolecular flexibility by using a single pure-component parameter. Two approaches to approximating the effect of the higher virial coefficients are considered, namely the Vega-Lago rescaling and Scaled Particle Theory (SPT). The Onsager trial function is employed to describe the orientational distribution function. Theoretical predictions for the equation of state and orientational order parameter are tested against results from Monte Carlo (MC) simulations. For linear chains of length 9 and longer, theoretical results are in excellent agreement with MC data. For smaller chain lengths, small errors introduced by the approximation of the higher virial coefficients become apparent, leading to a small underestimation of the pressure and a small overestimation of the density difference at the phase transition. For rod-coil fluids of reasonable rigidity, a quantitative comparison between theory and MC simulations is obtained. For more flexible chains, however, both the Vega-Lago rescaling and SPT lead to a small underestimation of the location of the phase transition.
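
    For reference, the Onsager trial function mentioned above is the standard one-parameter ansatz for the orientational distribution; in its usual normalized form it reads

```latex
f(\theta) = \frac{\alpha \,\cosh(\alpha\cos\theta)}{4\pi \sinh\alpha},
```

    where θ is the angle between the molecular axis and the nematic director and α ≥ 0 is a variational parameter (α → 0 recovers the isotropic distribution f = 1/4π).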

  14. Systematic discrepancies in Monte Carlo predictions of k-ratios emitted from thin films on substrates

    NASA Astrophysics Data System (ADS)

    Statham, P.; Llovet, X.; Duncumb, P.

    2012-03-01

    We have assessed the reliability of different Monte Carlo simulation programmes using the two available Bastin-Heijligers databases of thin-film measurements by EPMA. The MC simulation programmes tested include Curgenven-Duncumb MSMC, NISTMonte, Casino, and PENELOPE. Plots of the ratio of calculated to measured k-ratios ("kcalc/kmeas") against various parameters reveal error trends that are not apparent in simple error histograms. The results indicate that the MC programmes perform quite differently on the same dataset, yet they show a similarly pronounced "hockey stick" trend in the "kcalc/kmeas versus kmeas" plots. The most sophisticated programme, PENELOPE, gives the closest correspondence with experiment but still tends to underestimate experimental k-ratios by 10% for films that are thin compared with the electron range. We have investigated potential causes for this systematic behaviour and extended the study to data not collected by Bastin and Heijligers.

  15. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we theoretically investigate the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update via Monte Carlo simulations. Unlike in previous exclusion-process models, particles first attempt to hop over successive unoccupied sites with a probability q, which may represent the random access of particles. Numerical simulations yield stationary particle currents, density profiles, and phase diagrams. The system exhibits three possible stationary phases: the low-density (LD) phase, the high-density (HD) phase, and the maximal-current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with the simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
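
    One plausible reading of the hopping rule, sketched in plain Python with a random-sequential update; the values of α, β, and q are arbitrary demonstration choices, and the boundary details may differ from the paper's model.

```python
import random

L, alpha, beta, q = 100, 0.6, 0.4, 0.3
lattice = [0] * L                            # 0 = empty, 1 = occupied

def sweep():
    for _ in range(L + 1):                   # random-sequential update
        i = random.randrange(-1, L)
        if i == -1:                          # entry at the left boundary
            if not lattice[0] and random.random() < alpha:
                lattice[0] = 1
        elif lattice[i]:
            if i == L - 1:                   # exit at the right boundary
                if random.random() < beta:
                    lattice[i] = 0
            elif not lattice[i + 1]:
                lattice[i] = 0
                j = i + 1                    # long-range hop: with prob. q,
                while j < L - 1 and not lattice[j + 1] and random.random() < q:
                    j += 1                   # keep skipping over empty sites
                lattice[j] = 1

for _ in range(10000):
    sweep()
print(sum(lattice) / L)                      # bulk density estimate
```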

  16. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition probabilities between macroscopic states; its advantage over conventional NVT Monte Carlo (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems based on generalizations of the SW and square-shoulder fluids.
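
    The numerical-derivative step mentioned above follows directly from the microcanonical definitions (written here with k_B = 1 as a sketch):

```latex
\beta(E) = \frac{\partial S}{\partial E}, \qquad
T(E) = \frac{1}{\beta(E)}, \qquad
C_V = \left(\frac{\partial T}{\partial E}\right)^{-1}
    = -\beta^{2} \left(\frac{d\beta}{dE}\right)^{-1},
```

    so a parametrized fit of β(E) from a single run yields C_V(E) by differentiation.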

  17. Simulation of projected water demand and ground-water levels in the Coffee Sand and Eutaw-McShan aquifers in Union County, Mississippi, 2010 through 2050

    USGS Publications Warehouse

    Hutson, Susan S.; Strom, E.W.; Burt, D.E.; Mallory, M.J.

    2000-01-01

    Ground water from the Eutaw-McShan and the Coffee Sand aquifers is the major source of supply for residential, commercial, and industrial purposes in Union County, Mississippi. Unbiased, scientifically sound data and assessments are needed to assist agencies in better understanding and managing available water resources as continuing development and growth place more stress on those resources. The U.S. Geological Survey, in cooperation with the Tennessee Valley Authority, conducted an investigation using water-demand and ground-water models to evaluate the effect of future water demand on ground-water levels. Data collected for the 12 public-supply facilities and the self-supplied commercial and industrial facilities in Union County were used to construct water-demand models. The estimates of water demand to year 2050 were then input to a ground-water model based on the U.S. Geological Survey finite-difference computer code, MODFLOW. Total ground-water withdrawals for Union County in 1998 were estimated as 2.85 million gallons per day (Mgal/d). Of that amount, municipal withdrawals were 2.55 Mgal/d, with about 1.50 Mgal/d (59 percent) delivered to residential users. Nonmunicipal withdrawals were 0.296 Mgal/d. About 80 percent (2.27 Mgal/d) of the total ground-water withdrawal is produced from the Eutaw-McShan aquifer and about 13 percent (0.371 Mgal/d) from the Coffee Sand aquifer. Between normal- and high-growth conditions, total water demand could increase by 72 to 131 percent (2.9 Mgal/d in 1998 to 6.7 Mgal/d in year 2050), with municipal demand increasing by 77 to 146 percent (2.6 to 6.4 Mgal/d). Increased pumping to meet the demand for water was simulated to determine the effect on water levels in the Coffee Sand and Eutaw-McShan aquifers. Under baseline-growth conditions, increased water use by year 2050 could result in an additional 65 feet of drawdown in the New Albany area below year 2000 water levels in the Coffee Sand aquifer and about 120 feet of maximum drawdown in the Eutaw-McShan aquifer. Under normal-growth conditions, increased water use could result in an additional 65 feet of drawdown in the New Albany area below year 2000 water levels in the Coffee Sand aquifer and about 135 feet of maximum drawdown in the Eutaw-McShan aquifer. Under high-growth conditions, increased water use could result in 75 feet of drawdown in the New Albany area below year 2000 water levels in the Coffee Sand aquifer and about 190 feet of maximum drawdown in the Eutaw-McShan aquifer. The resulting high-growth projected water level for the year 2050 at the center of the drawdown cone in the New Albany area is between 450 and 500 feet above the top of the Eutaw-McShan aquifer.

  18. Monte Carlo investigation of I-125 interseed attenuation for standard and thinner seeds in prostate brachytherapy with phantom validation using a MOSFET.

    PubMed

    Mason, J; Al-Qaisieh, B; Bownes, P; Henry, A; Thwaites, D

    2013-03-01

    In permanent seed implant prostate brachytherapy, the actual dose delivered to the patient may be less than that calculated by TG-43U1 due to interseed attenuation (ISA) and differences between prostate tissue composition and water. In this study the magnitude of the ISA effect is assessed in a phantom and in clinical prostate postimplant cases. Results are compared for seed models 6711 and 9011 with 0.8 and 0.5 mm diameters, respectively. A polymethyl methacrylate (PMMA) phantom was designed to perform ISA measurements in a simple eight-seed arrangement and at the center of an implant of 36 seeds. Monte Carlo (MC) simulation and experimental measurements using a MOSFET dosimeter were used to measure dose rate and the ISA effect. MC simulations of 15 CT-based postimplant prostate treatment plans were performed to compare the clinical impact of ISA on dose to prostate, urethra, rectum, and the volume enclosed by the 100% isodose, for the 6711 and 9011 seed models. In the phantom, ISA reduced the dose rate at the MOSFET position by 8.6%-18.3% (6711) and 7.8%-16.7% (9011), depending on the measurement configuration. MOSFET-measured dose rates agreed with MC simulation predictions within the MOSFET measurement uncertainty, which ranged from 5.5% to 7.2% depending on the measurement configuration (k = 1, for the mean of four measurements). For 15 clinical implants, the mean ISA effect for 6711 was to reduce prostate D90 by 4.2 Gy (3%), prostate V100 by 0.5 cc (1.4%), urethra D10 by 11.3 Gy (4.4%), rectal D2cc by 5.5 Gy (4.6%), and the 100% isodose volume by 2.3 cc. For the 9011 seed the mean ISA effect reduced prostate D90 by 2.2 Gy (1.6%), prostate V100 by 0.3 cc (0.7%), urethra D10 by 8.0 Gy (3.2%), rectal D2cc by 3.1 Gy (2.7%), and the 100% isodose volume by 1.2 cc. Differences between the MC simulation and TG-43U1 consensus data for the 6711 seed model had a similar impact, reducing mean prostate D90 by 6 Gy (4.2%) and V100 by 0.6 cc (1.8%). ISA causes the delivered dose in prostate seed implant brachytherapy to be lower than the dose calculated by TG-43U1. MC simulations of phantom seed arrangements show that the dose at a point can be reduced by up to 18%, which has been validated using a MOSFET dosimeter. Clinical simulations show that ISA reduces DVH parameter values, but the reduction is less for thinner seeds.

  19. Patient-specific Monte Carlo-based dose-kernel approach for inverse planning in afterloading brachytherapy.

    PubMed

    D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc

    2011-12-01

    Brachytherapy planning software relies on the Task Group report 43 (TG-43) dosimetry formalism. This formalism, based on a water approximation, neglects the various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve dose conformity and increase treatment quality. The method was based on dose kernels precalculated in full patient geometries, each representing the dose distribution of a brachytherapy source at a single dwell position, using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method that fully accounts for the heterogeneities in dose optimization. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% was found for the dose received by 90% of the clinical target volume. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the TG-43 approach for the Axxent source.
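
    Schematically, the optimization works on a dose model that is linear in the dwell times, D(v) = Σ_j t_j K_j(v), with the kernels K_j precomputed by MC. The sketch below uses random stand-in kernels and a plain simulated-annealing loop in place of the actual inverse planning by simulated annealing algorithm; all names and numbers are illustrative.

```python
import math
import numpy as np

# Toy setup: K[j, v] = dose to voxel v per unit dwell time at position j.
# Real kernels come from MC simulation; these random values are placeholders.
rng = np.random.default_rng(0)
n_dwell, n_vox = 8, 50
K = rng.uniform(0.0, 1.0, (n_dwell, n_vox))
target = np.full(n_vox, 10.0)                # prescribed dose per voxel

def cost(t):
    """Quadratic deviation of delivered dose from the prescription."""
    return float(np.sum((K.T @ t - target) ** 2))

t, T = np.ones(n_dwell), 1.0
for step in range(20000):
    cand = t.copy()
    j = rng.integers(n_dwell)
    cand[j] = max(cand[j] + rng.normal(0.0, 0.1), 0.0)   # perturb one dwell time
    dc = cost(cand) - cost(t)
    if dc < 0 or rng.random() < math.exp(-dc / T):       # Metropolis acceptance
        t = cand
    T *= 0.9997                                          # geometric cooling
print(t, cost(t))
```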

  20. Kinetic Monte Carlo simulation of the efficiency roll-off, emission color, and degradation of organic light-emitting diodes (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Coehoorn, Reinder; van Eersel, Harm; Bobbert, Peter A.; Janssen, Rene A. J.

    2015-10-01

    The performance of organic light-emitting diodes (OLEDs) is determined by a complex interplay of the charge-transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an integral manner. The method employs a physically transparent mechanistic approach and is based on measurable parameters. All processes can be followed with molecular-scale spatial resolution and sub-nanosecond time resolution, for any layer structure and any mixture of materials. In the talk, applications to the efficiency roll-off, emission color, and lifetime of white and monochrome phosphorescent OLEDs [1,2] are demonstrated, and a comparison with experimental results is given. The simulations show to what extent triplet-polaron quenching (TPQ) and triplet-triplet annihilation (TTA) contribute to the roll-off, and how the microscopic parameters describing these processes can be properly deduced from dedicated experiments. Degradation is treated as a result of the (accelerated) conversion of emitter molecules to non-emissive sites upon a TPQ process. The degradation rate, and hence the device lifetime, is shown to depend on the emitter concentration and on the precise type of TPQ process. Results for both single-doped and co-doped OLEDs are presented, revealing that the kMC simulations enable efficient simulation-assisted layer-stack development. [1] H. van Eersel et al., Appl. Phys. Lett. 105, 143303 (2014). [2] R. Coehoorn et al., Adv. Funct. Mater. (2015), publ. online (DOI: 10.1002/adfm.201402532)

  1. Stability of nanocrystalline Ni-based alloys: coupling Monte Carlo and molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Waseda, O.; Goldenstein, H.; Silva, G. F. B. Lenz e.; Neiva, A.; Chantrenne, P.; Morthomas, J.; Perez, M.; Becquart, C. S.; Veiga, R. G. A.

    2017-10-01

    The thermal stability of nanocrystalline Ni with small additions of Mo or W (up to 1 at%) was investigated in computer simulations by means of a combined Monte Carlo (MC)/molecular dynamics (MD) two-step approach. In the first step, energy-biased on-lattice MC revealed segregation of the alloying elements to grain boundaries. However, the condition for the thermodynamic stability of these nanocrystalline Ni alloys (zero grain-boundary energy) was not fulfilled. Subsequently, MD simulations were carried out for up to 0.5 μs at 1000 K. At this temperature, grain growth was hindered for minimum global concentrations of 0.5 at% W and 0.7 at% Mo, thus preserving most of the nanocrystalline structure. This is in clear contrast to a pure Ni model system, for which the transformation into a monocrystal was observed in MD simulations within 0.2 μs at the same temperature. These results suggest that grain-boundary segregation of low-solubility alloying elements in low-alloyed systems can produce high-temperature metastable nanocrystalline materials. MD simulations carried out at 1200 K for 1 at% Mo/W showed significant grain-boundary migration accompanied by some degree of solute diffusion, providing additional evidence that solute drag contributed most to the nanostructure stability observed at lower temperature.

  2. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems where the site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a runtime performance that is essentially independent of lattice size by generating code, optimized for a defined kMC model, for the efficiency-determining local update of available events. For this model definition and the control of all runtime and evaluation aspects, kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or through a graphical user interface, which visualizes the model geometry, the lattice occupations, and the rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with applications to kMC models of surface catalytic processes, where for given operating conditions (temperature and partial pressures of all reactants) the central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
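
    The core of any lattice kMC engine of this kind is the selection of the next event and the advance of the simulation clock. A minimal, generic sketch of the standard variable-step-size method follows (plain Python, not the kmos API):

```python
import math
import random

def kmc_step(rates):
    """Pick one elementary event with probability proportional to its rate
    and draw the exponentially distributed waiting time (VSSM/BKL scheme)."""
    k_tot = sum(rates)
    target = random.random() * k_tot
    acc = 0.0
    for event, k in enumerate(rates):
        acc += k
        if target < acc:
            break
    dt = -math.log(random.random()) / k_tot
    return event, dt

# toy usage: three available events (e.g., adsorption, diffusion, reaction)
t = 0.0
for _ in range(5):
    event, dt = kmc_step([2.0, 0.5, 1.0])
    t += dt
    print(f"t = {t:.3f}, executed event {event}")
```

    kmos's code generation makes the rate bookkeeping effectively independent of lattice size; the linear scan above is the conceptual, unoptimized version.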

  3. Partial protection against multiple RT-SHIV162P3 vaginal challenge of rhesus macaques by a silicone elastomer vaginal ring releasing the NNRTI MC1220

    PubMed Central

    Fetherston, Susan M.; Geer, Leslie; Veazey, Ronald S.; Goldman, Laurie; Murphy, Diarmaid J.; Ketas, Thomas J.; Klasse, Per Johan; Blois, Sylvain; La Colla, Paolo; Moore, John P.; Malcolm, R. Karl

    2013-01-01

    Objectives The non-nucleoside reverse transcriptase inhibitor MC1220 has potent in vitro activity against HIV type 1 (HIV-1). A liposome gel formulation of MC1220 has previously been reported to partially protect rhesus macaques against vaginal challenge with a simian HIV (SHIV). Here, we describe the pre-clinical development of an MC1220-releasing silicone elastomer vaginal ring (SEVR), including pharmacokinetic (PK) and efficacy studies in macaques. Methods In vitro release studies were conducted on SEVRs loaded with 400 mg of MC1220, using simulated vaginal fluid (SVF, n = 4) and 1 : 1 isopropanol/water (IPA/H2O, n = 4) as release media. For PK evaluation, SEVRs were inserted into adult female macaques (n = 6) for 30 days. Following a 1-week washout period, fresh rings were placed in the same animals, which were then challenged vaginally with RT-SHIV162P3 once weekly for 4 weeks. Results SEVRs released 1.66 and 101 mg of MC1220 into SVF and IPA/H2O, respectively, over 30 days, the differential reflecting the low aqueous solubility of the drug. In macaque PK studies, MC1220 was consistently detected in vaginal fluid (peak 845 ng/mL) and plasma (peak 0.91 ng/mL). Kaplan–Meier analysis over 9 weeks showed significantly lower infection rates for animals given MC1220-containing SEVRs than placebo rings (hazard ratio 0.20, P = 0.0037). Conclusions An MC1220-releasing SEVR partially protected macaques from vaginal challenge. Such ring devices are a practical method for providing sustained, coitally independent protection against vaginal exposure to HIV-1. PMID:23109186

  4. SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Tian, Z; Pompos, A

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculating absorbed dose and the fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming to achieve accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supports transport simulation in voxelized geometry with kinetic energies up to 450 MeV/u. A Class II condensed-history algorithm was employed for charged-particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross-section and yield data from Geant4; these secondaries were transported via the condensed-history scheme. goCMC supports scoring various quantities of interest, e.g., physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u, and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm, and 0.53 mm, and the mean dose difference was 0.47%, 0.72%, and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20-100 seconds to simulate 10^7 carbon ions on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60-100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency, and portability make goCMC attractive for research and clinical applications in carbon therapy.
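
    For reference, the stopping-power ingredient named above is the standard Bethe-Bloch mean energy loss per unit path length (PDG form; whether goCMC includes the density-effect term δ is not stated in the abstract):

```latex
-\left\langle \frac{dE}{dx} \right\rangle
= K z^2 \frac{Z}{A} \frac{1}{\beta^2}
\left[ \frac{1}{2}\ln\frac{2 m_e c^2 \beta^2 \gamma^2 W_{\max}}{I^2}
- \beta^2 - \frac{\delta(\beta\gamma)}{2} \right],
```

    with K = 4π N_A r_e² m_e c² ≈ 0.307 MeV mol⁻¹ cm², z the projectile charge, Z/A and I the atomic properties of the target, and W_max the maximum energy transfer to a single electron.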

  5. Comparison of heavy-ion- and electron-beam upset data for GaAs SRAMs. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flesner, L.D.; Zuleeg, R.; Kolasinski, W.A.

    1992-07-16

    We report the results of experiments designed to evaluate the extent to which focused electron-beam pulses simulate energetic ion upset phenomena in GaAs memory circuits fabricated by the McDonnell Douglas Astronautics Company. The results of two experimental methods were compared, irradiation by heavy-ion particle beams, and upset mapping using focused electron pulses. Linear energy transfer (LET) thresholds and upset cross sections are derived from the data for both methods. A comparison of results shows good agreement, indicating that for these circuits electron-beam pulse mapping is a viable simulation technique.

  6. Episcleral eye plaque dosimetry comparison for the Eye Physics EP917 using Plaque Simulator and Monte Carlo simulation

    PubMed Central

    Amoush, Ahmad; Wilkinson, Douglas A.

    2015-01-01

    This work is a comparative study of the dosimetry calculated by Plaque Simulator, a treatment planning system for eye plaque brachytherapy, and the dosimetry calculated using Monte Carlo simulation for an Eye Physics model EP917 eye plaque. Monte Carlo (MC) simulation using MCNPX 2.7 was used to calculate the central-axis dose in water for an EP917 eye plaque fully loaded with 17 IsoAid Advantage 125I seeds. In addition, the dosimetry parameters Λ, gL(r), and F(r,θ) were calculated for the IsoAid Advantage model IAI-125 125I seed and benchmarked against published data. Bebig Plaque Simulator (PS) v5.74 was used to calculate the central-axis dose based on the AAPM Updated Task Group 43 (TG-43U1) dose formalism. The calculated central-axis doses from MC and PS were then compared. When the MC dosimetry parameters for the IsoAid Advantage 125I seed were compared with the consensus values, Λ agreed with the consensus value to within 2.3%. However, much larger differences were found between the MC-calculated gL(r) and F(r,θ) and the consensus values. The differences between the MC-calculated dosimetry parameters are much smaller when compared with recently published data. The differences between the calculated central-axis absolute dose from MC and PS ranged from 5% to 10% for distances between 1 and 12 mm from the outer scleral surface. When the dosimetry parameters for the 125I seed from this study were used in PS, the calculated absolute central-axis dose differences were reduced by 2.3% from depths of 4 to 12 mm from the outer scleral surface. We conclude that PS adequately models the central dose profile of this plaque using its defaults for the IsoAid model IAI-125 at distances of 1 to 7 mm from the outer scleral surface. However, improved dose accuracy can be obtained by using updated dosimetry parameters for the IsoAid model IAI-125 125I seed. PACS number: 87.55.K- PMID:26699577
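
    The TG-43U1 line-source formalism referenced above has the standard form

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
```

    with reference point r₀ = 1 cm, θ₀ = π/2, where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function.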

  7. MC3, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cawkwell, Marc Jon

    2016-09-09

    The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)], together with electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, ii) tensors of elastic constants and compliances via the Parrinello-Rahman fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires the LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
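
    As background on the ensemble MC3 samples: a standard isothermal-isobaric MC implementation accepts a trial volume move V → V′ (with molecular centers rescaled accordingly) with probability

```latex
P_{\text{acc}}(V \to V') = \min\left\{ 1,\;
\exp\!\left[ -\beta\,\Delta U - \beta P\,\Delta V
+ N \ln\frac{V'}{V} \right] \right\},
```

    where N is the number of rigid molecules. This is the textbook form, not necessarily the exact move set used in MC3.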

  8. Development of the McGill simulator for endoscopic sinus surgery: a new high-fidelity virtual reality simulator for endoscopic sinus surgery.

    PubMed

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A

    2014-01-01

    The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.

  9. Automatic insertion of simulated microcalcification clusters in a software breast phantom

    NASA Astrophysics Data System (ADS)

    Shankla, Varsha; Pokrajac, David D.; Weinstein, Susan P.; DeLeo, Michael; Tuite, Catherine; Roth, Robyn; Conant, Emily F.; Maidment, Andrew D.; Bakic, Predrag R.

    2014-03-01

    An automated method has been developed to insert realistic clusters of simulated microcalcifications (MCs) into computer models of breast anatomy. This algorithm has been developed as part of a virtual clinical trial (VCT) software pipeline, which includes the simulation of breast anatomy, mechanical compression, image acquisition, image processing, display and interpretation. An automated insertion method has value in VCTs involving large numbers of images. The insertion method was designed to support various insertion placement strategies, governed by probability distribution functions (pdf). The pdf can be predicated on histological or biological models of tumor growth, or estimated from the locations of actual calcification clusters. To validate the automated insertion method, a 2-AFC observer study was designed to compare two placement strategies, undirected and directed. The undirected strategy could place a MC cluster anywhere within the phantom volume. The directed strategy placed MC clusters within fibroglandular tissue on the assumption that calcifications originate from epithelial breast tissue. Three radiologists were asked to select between two simulated phantom images, one from each placement strategy. Furthermore, questions were posed to probe the rationale behind the observer's selection. The radiologists found the resulting cluster placement to be realistic in 92% of cases, validating the automated insertion method. There was a significant preference for the cluster to be positioned on a background of adipose or mixed adipose/fibroglandular tissues. Based upon these results, this automated lesion placement method will be included in our VCT simulation pipeline.

  10. Computational model for simulation of sequences of helicity and angular momentum transfer in turbid tissue-like scattering medium (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2017-02-01

    This report considers the development of a unified Monte Carlo (MC)-based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory, and helicity flip are observed, and a detailed comparison is presented and discussed.

  11. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field-programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
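
    The additive lagged Fibonacci recurrence itself is simple: x_n = (x_{n-j} + x_{n-k}) mod 2^m with lags j < k. The sketch below uses the classic lags (24, 55); how the paper's parallel variant partitions state across FPGA logic is not described in the abstract, so this is a plain sequential reference version.

```python
import random

class ALFG:
    """Additive lagged Fibonacci generator: x[n] = (x[n-j] + x[n-k]) mod 2**m.
    Lags (24, 55) are a classic choice; seeding must leave at least one odd
    word in the state for the generator to reach its full period."""
    def __init__(self, seed=1, j=24, k=55, m=32):
        self.j, self.k, self.mask = j, k, (1 << m) - 1
        r = random.Random(seed)
        self.state = [r.getrandbits(m) for _ in range(k)]
        self.state[0] |= 1            # guarantee an odd word in the state
        self.pos = 0                  # index of the oldest word, x[n-k]

    def next(self):
        k = self.k
        new = (self.state[(self.pos + k - self.j) % k]
               + self.state[self.pos]) & self.mask
        self.state[self.pos] = new    # overwrite x[n-k] with x[n]
        self.pos = (self.pos + 1) % k
        return new

gen = ALFG(seed=42)
print([gen.next() for _ in range(5)])
```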

  12. WE-H-BRA-04: Biological Geometries for the Monte Carlo Simulation Toolkit TOPAS-nBio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNamara, A; Held, K; Paganetti, H

    2016-06-15

    Purpose: New advances in radiation therapy are most likely to come from the complex interface of physics, chemistry, and biology. Computational simulations offer a powerful tool for quantitatively investigating radiation interactions with biological tissue and can thus help bridge the gap between physics and biology. The aim of TOPAS-nBio is to provide a comprehensive tool for generating advanced radiobiology simulations. Methods: TOPAS wraps and extends the Geant4 Monte Carlo (MC) simulation toolkit. TOPAS-nBio is an extension to TOPAS which utilizes the physics processes in Geant4-DNA to model biological damage from very low energy secondary electrons. Specialized cell, organelle, and molecular geometries were designed for the toolkit. Results: TOPAS-nBio gives the user the capability of simulating biological geometries, ranging from the micron scale (e.g., cells and organelles) to complex nano-scale geometries (e.g., DNA and proteins). The user interacts with TOPAS-nBio through easy-to-use input parameter files. For example, in a simple cell simulation the user can specify the cell type and size as well as the type, number, and size of included organelles. For more detailed nuclear simulations, the user can specify chromosome territories containing chromatin-fiber loops, the latter comprising nucleosomes on a double helix. The chromatin fibers can be arranged in simple rigid geometries or within fractal globules, mimicking realistic chromosome territories. TOPAS-nBio also provides users with the capability of reading Protein Data Bank 3D structural files to simulate radiation damage to proteins or nucleic acids, e.g., histones or RNA. TOPAS-nBio has been validated by comparing results to other track-structure simulation software and published experimental measurements. Conclusion: TOPAS-nBio provides users with a comprehensive MC simulation tool for radiobiological simulations, giving users without advanced programming skills the ability to design and run complex simulations.

  13. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 05: Not all geometries are equivalent for magnetic field Fano cavity tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malkov, Victor N.; Rogers, David W.O.

    The coupling of MRI and radiation treatment systems for magnetic resonance guided radiation therapy necessitates a reliable magnetic-field-capable Monte Carlo (MC) code. In addition to the influence of the magnetic field on dose distributions, the question of proper calibration has arisen due to the several-percent variation of ion chamber and solid-state detector responses in magnetic fields compared to the 0 T case (Reynolds et al., Med Phys, 2013). In the absence of a magnetic field, EGSnrc has been shown to pass the Fano cavity test (a rigorous benchmarking tool for MC codes) at the 0.1% level (Kawrakow, Med. Phys., 2000), and similar results should be required of magnetic-field-capable MC algorithms. To properly test such developing MC codes, the Fano cavity theorem has been adapted to function in a magnetic field (Bouchard et al., PMB, 2015). In this work, the Fano cavity test is applied in slab and ion-chamber-like geometries to test the transport options of a magnetic-field algorithm implemented in EGSnrc. Results show that the deviation of the MC dose from the expected Fano cavity theory value is highly sensitive to the choice of geometry, and the ion chamber geometry appears to pass the test more easily than larger slab geometries. As magnetic-field MC codes begin to be used for dose simulations and correction-factor calculations, care must be taken to apply the most rigorous Fano test geometries to ensure the reliability of such algorithms.

  14. Inference of Markovian properties of molecular sequences from NGS data and applications to comparative genomics.

    PubMed

    Ren, Jie; Song, Kai; Deng, Minghua; Reinert, Gesine; Cannon, Charles H; Sun, Fengzhu

    2016-04-01

    Next-generation sequencing (NGS) technologies generate large amounts of short read data for many different organisms. The fact that NGS reads are generally short makes it challenging to assemble the reads and reconstruct the original genome sequence. For clustering genomes using such NGS data, word-count based alignment-free sequence comparison is a promising approach, but for this approach the underlying expected word counts are essential. A plausible model for the underlying distribution of word counts is given by modeling the DNA sequence as a Markov chain (MC). For single long sequences, efficient statistics are available to estimate the order of an MC and the transition probability matrix. As NGS data do not provide a single long sequence, inference methods for Markovian properties of sequences based on single long sequences cannot be directly used for NGS short read data. Here we derive a normal approximation for such word counts. We also show that the traditional chi-square statistic has an approximate gamma distribution, using the Lander-Waterman model for physical mapping. We propose several methods to estimate the order of the MC based on NGS reads and evaluate them using simulations. We illustrate the applications of our results by clustering genomic sequences of several vertebrate and tree species based on NGS reads, using alignment-free sequence dissimilarity measures. We find that the estimated order of the MC has a considerable effect on the clustering results, and that the clustering results using an MC of the estimated order give a plausible clustering of the species. Our implementation of the statistics developed here is available as R package 'NGS.MC' at http://www-rcf.usc.edu/∼fsun/Programs/NGS-MC/NGS-MC.html. Supplementary data are available at Bioinformatics online.
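
    The basic counting step behind any such order estimate is a tally of (context, next base) transitions pooled over the reads. The sketch below (hypothetical helper name, not the NGS.MC implementation) estimates the transition probabilities for a trial order r:

```python
from collections import Counter, defaultdict

def transition_estimates(reads, r):
    """Estimate order-r Markov transition probabilities from short reads
    by pooling (length-r context, next base) counts across all reads."""
    pair_counts = Counter()
    for read in reads:
        for i in range(len(read) - r):
            pair_counts[(read[i:i + r], read[i + r])] += 1
    context_totals = defaultdict(int)
    for (ctx, _), n in pair_counts.items():
        context_totals[ctx] += n
    return {(ctx, b): n / context_totals[ctx]
            for (ctx, b), n in pair_counts.items()}

# toy usage
reads = ["ACGTACGT", "CGTACG", "GTACGTAC"]
print(transition_estimates(reads, r=2))
```

    Choosing r is then a model-selection problem; the paper's contribution is the distribution theory (normal and gamma approximations) that makes such tests valid for overlapping counts from short reads.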

  15. Complex damage distribution behaviour in cobalt implanted rutile TiO2 (1 1 0) lattice

    NASA Astrophysics Data System (ADS)

    Joshi, Shalik Ram; Padmanabhan, B.; Chanda, Anupama; Ojha, Sunil; Kanjilal, D.; Varma, Shikha

    2017-11-01

    The present work investigates the radiation damage, amorphization, and structural modifications produced by ion-solid interactions in TiO2 crystals during 200 keV cobalt ion implantation. RBS/C and GIXRD have been utilized to evaluate the damage in the host lattice as a function of ion fluence. A multiple scattering formalism has been applied to extract the depth-dependent damage distributions in TiO2(1 1 0). The results have been compared with MC simulations performed using SRIM-2013. RBS/C results delineate a buried amorphous layer at low fluence. Surprisingly, ion-induced dynamic activation produces a recovery in this damage at higher fluences. This improvement interestingly occurs only in deep regions (60-300 nm), where a systematic lowering of damage with fluence is observed. Formation of Co-Ti-O phases and generation of stress in the TiO2 lattice can also be responsible for this improvement in deep regions. In contrast, the surface region (0-60 nm) indicates a gradual increase in damage with fluence. Such a switch in damage behavior creates a cross point in the damage profiles at 60 nm. The surface region is a sink for vacancies, whereas the deep layers are interstitial-rich. However, these regions are far separated from each other, resulting in an intermediate (100-150 nm) region with a significant dip (valley) in damage, which can be characterized by enhanced recombination of point defects. The damage profiles thus indicate very complex behavior. MC simulations, however, present very different results. They depict a damage profile that extends to a depth of only 150 nm, which is only about half of the damage width observed here via RBS/C. Moreover, MC simulations do not indicate the presence of any valley-like structure in the damage profile. The complex nature of the damage distribution observed here via RBS/C may be related to the highly ionic nature of the chemical bonds in the TiO2 lattice.

  16. TU-H-BRC-09: Validation of a Novel Therapeutic X-Ray Array Source and Collimation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trovati, S; King, GJ; Loo, BW

    2016-06-15

    Purpose: We have experimentally characterized and simulated the dosimetric properties and spatial fidelity of a novel X-ray array source and collimation system, SPHINX, which has the potential to generate complex intensity-modulated X-ray beams by varying only the electron-beam intensity, without any moving parts such as multi-leaf collimators. Methods: We investigated the spatial fidelity and X-ray performance of a tungsten SPHINX prototype using a CyberKnife and the experimental high-energy electron beam line at XTA at SLAC National Accelerator Laboratory. Dose distributions were recorded with Gafchromic films placed at the distal end of SPHINX and at several depths in a solid-water phantom. The geometry of SPHINX and of the experimental set-ups was also modeled in Monte Carlo (MC) simulations with the FLUKA code, which was used to reproduce the experimental results and, after validation, to predict and optimize the performance and design of SPHINX. Results: The results indicate significant particle leakage through the channels during single-channel irradiation at high incident energies, followed by a rapid decrease for energies of clinical interest. When the collimator channels are used as the target, the photon production increases, but at the expense of an enlarged beam size. Illuminating all channels simultaneously shows fairly even transmission of the beam. Conclusion: With the measurements we have verified the MC models and the uniformity of beam transmission through SPHINX, and we have evaluated the importance of particle leakage through adjacent channels. These results can be used to optimize the SPHINX design through the validated MC simulations. Funding: Weston Havens Foundation, Office of the Dean of Medical School and Office of the Provost (Stanford University). Loo, Maxim, Borchard, and Tantawi are co-founders of TibaRay Inc. Loo and Tantawi are TibaRay Inc. board members. Loo and Maxim received grants from Varian Medical Systems and RaySearch Laboratory.

  17. Monte-Carlo simulation of a stochastic differential equation

    NASA Astrophysics Data System (ADS)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered more effective than other algorithms, such as the finite element or finite difference methods. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results at large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch-angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
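
    The baseline against which such higher-order operators are judged is the first-order Euler-Maruyama scheme. A minimal sketch for a scalar SDE dX = a(X)dt + b(X)dW follows; for a spatially varying diffusion coefficient an Itô drift correction proportional to dD/dx would also enter, which is omitted here for brevity.

```python
import numpy as np

def euler_maruyama(a, b, x0, t_end, dt, rng=None):
    """Integrate dX = a(X) dt + b(X) dW with the Euler-Maruyama scheme."""
    rng = rng or np.random.default_rng()
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))       # Wiener increment
        x[i + 1] = x[i] + a(x[i]) * dt + b(x[i]) * dw
    return x

# example: inhomogeneous diffusion coefficient D(x) -> noise term b = sqrt(2 D)
D = lambda x: 1.0 + 0.5 * np.sin(x)
path = euler_maruyama(a=lambda x: 0.0, b=lambda x: np.sqrt(2 * D(x)),
                      x0=0.0, t_end=10.0, dt=1e-3)
print(path[-1])
```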

  18. Uncertainty Quantification in CO2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach keeps the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of the MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate that maximizes gas saturation (residual trapping) during injection, thereby minimizing the chance of leakage.
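
    A toy illustration of the surrogate-plus-MC workflow follows, with one standard-normal uncertain parameter, a stand-in function in place of the reservoir simulator, and a full (rather than MIP-selected) Hermite basis, so it deliberately omits the paper's subset-selection step.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def simulator(x):
    """Stand-in for the expensive reservoir simulator (hypothetical)."""
    return np.exp(0.3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)

# training: evaluate the "simulator" at sampled standard-normal points,
# then fit PCE coefficients in the probabilists' Hermite basis
xi = rng.standard_normal(200)
coeffs = He.hermefit(xi, simulator(xi), deg=4)

# MC on the surrogate: cheap polynomial evaluations replace the simulator
xs = rng.standard_normal(10**6)
ys = He.hermeval(xs, coeffs)
print(ys.mean(), ys.std())   # output statistics for uncertainty quantification
```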

  19. BCA-kMC Hybrid Simulation for Hydrogen and Helium Implantation in Material under Plasma Irradiation

    NASA Astrophysics Data System (ADS)

    Kato, Shuichi; Ito, Atsushi; Sasao, Mamiko; Nakamura, Hiroaki; Wada, Motoi

    2015-09-01

    Ion implantation by plasma irradiation achieves very high impurity concentrations in materials. The high impurity concentration causes deformation and destruction of the material, phenomena peculiar to plasma-material interaction (PMI). The injection of plasma particles is generally simulated using the binary collision approximation (BCA) or molecular dynamics (MD), while the diffusion of implanted atoms has traditionally been solved with the diffusion equation, in which the implanted atoms are replaced by a continuous concentration field. However, the diffusion equation has insufficient accuracy at low concentrations and at locally high concentrations, such as in hydrogen blistering and helium bubbles. This problem is overcome by kinetic Monte Carlo (kMC), which represents the diffusion of implanted atoms as jumps between interstitial sites in the material. In this paper, we propose a new approach, the 'BCA-kMC hybrid simulation', for hydrogen and helium implantation under plasma irradiation.

  20. Magnetic Levitation of MC3T3 Osteoblast Cells as a Ground-Based Simulation of Microgravity

    PubMed Central

    Kidder, Louis S.; Williams, Philip C.; Xu, Wayne Wenzhong

    2009-01-01

    Diamagnetic samples placed in a strong magnetic field with a magnetic field gradient experience a magnetic force. Stable magnetic levitation occurs when the magnetic force exactly counterbalances the gravitational force; under this condition, a diamagnetic sample is in a simulated microgravity environment. The purpose of this study is to explore whether MC3T3-E1 osteoblastic cells can be grown in magnetically simulated hypo-g and hyper-g environments and to determine whether genes are differentially expressed under these conditions. The murine calvarial osteoblastic cell line MC3T3-E1, grown on Cytodex-3 beads, was subjected to a net gravitational force of 0, 1, and 2 g in a 17 T superconducting magnet for 2 days. Microarray analysis of these cells indicated that gravitational stress leads to up- and down-regulation of hundreds of genes. The methodology for sustaining long-term magnetic levitation of biological systems is discussed. PMID:20052306
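
    A worked statement of the balance condition described above (a sketch with the usual symbols, not taken from the paper): for a material of volume magnetic susceptibility χ and mass density ρ in a vertical field B(z), the magnetic force per unit volume is (χ/μ₀)B(dB/dz), so levitation requires

```latex
\frac{|\chi|}{\mu_0}\, B \left|\frac{dB}{dz}\right| = \rho\, g .
```

    For a diamagnet (χ < 0) this is satisfied above the field maximum, where B decreases with height; for water-like material (χ ≈ -9×10⁻⁶, ρ ≈ 10³ kg/m³) the required B·dB/dz is roughly 1.4×10³ T²/m, which is attainable in a 17 T superconducting magnet.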

  1. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard for simulating light propagation in turbid media, including biotissues, and technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulating light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation, with a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code by simulating light transport in the human head and determining the measurement volume in near-infrared spectroscopy brain sensing.

  2. TU-AB-BRC-09: Fast Dose-Averaged LET and Biological Dose Calculations for Proton Therapy Using Graphics Cards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, H; Tseung, Chan; Beltran, C

    Purpose: To demonstrate fast and accurate Monte Carlo (MC) calculations of proton dose-averaged linear energy transfer (LETd) and biological dose (BD) on a graphics processing unit (GPU) card. Methods: A previously validated GPU-based MC simulation of proton transport was used to rapidly generate LETd distributions for proton treatment plans. Since this MC handles proton-nucleus interactions on an event-by-event basis using a Bertini intranuclear cascade-evaporation model, secondary protons were taken into account. The smaller contributions of secondary neutrons and recoil nuclei were ignored. Recent work has shown that LETd values are sensitive to the scoring method. The GPU-based LETd calculations were verified by comparison with a TOPAS custom scorer that uses tabulated stopping powers, following recommendations by other authors. Comparisons were made for prostate and head-and-neck patients. A Python script is used to convert the MC-generated LETd distributions to BD using a variety of published linear-quadratic models, and to export the BD in DICOM format for subsequent evaluation. Results: Very good agreement is obtained between TOPAS and our GPU MC. Given a complex head-and-neck plan with 1 mm voxel spacing, the physical dose, LETd, and BD calculations for 10^8 proton histories can be completed in about 5 minutes on an NVIDIA Titan X card. The rapid turnover means that MC feedback can be obtained on dosimetric plan accuracy as well as BD hotspot locations, particularly with regard to their proximity to critical structures. In our institution the GPU MC-generated dose, LETd, and BD maps are used to assess plan quality for all patients undergoing treatment. Conclusion: Fast and accurate MC-based LETd calculations can be performed on the GPU. The resulting BD maps provide valuable feedback during treatment-plan review. Partially funded by Varian Medical Systems.
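
    For context, the dose-averaged LET scored above is conventionally defined per voxel as (the standard definition; the scoring subtleties the authors mention concern how the individual contributions are tallied):

```latex
\mathrm{LET}_d = \frac{\sum_i d_i \, L_i}{\sum_i d_i},
```

    where d_i is the dose deposited by the i-th energy-deposition event (or track segment) in the voxel and L_i the corresponding electronic stopping power.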

  3. Does phenomenological kinetics provide an adequate description of heterogeneous catalytic reactions?

    PubMed

    Temel, Burcin; Meskine, Hakim; Reuter, Karsten; Scheffler, Matthias; Metiu, Horia

    2007-05-28

    Phenomenological kinetics (PK) is widely used in the study of the reaction rates in heterogeneous catalysis, and it is an important aid in reactor design. PK makes simplifying assumptions: It neglects the role of fluctuations, assumes that there is no correlation between the locations of the reactants on the surface, and considers the reacting mixture to be an ideal solution. In this article we test to what extent these assumptions damage the theory. In practice the PK rate equations are used by adjusting the rate constants to fit the results of the experiments. However, there are numerous examples where a mechanism fitted the data and was shown later to be erroneous or where two mutually exclusive mechanisms fitted well the same set of data. Because of this, we compare the PK equations to "computer experiments" that use kinetic Monte Carlo (kMC) simulations. Unlike in real experiments, in kMC the structure of the surface, the reaction mechanism, and the rate constants are known. Therefore, any discrepancy between PK and kMC must be attributed to an intrinsic failure of PK. We find that the results obtained by solving the PK equations and those obtained from kMC, while using the same rate constants and the same reactions, do not agree. Moreover, when we vary the rate constants in the PK model to fit the turnover frequencies produced by kMC, we find that the fit is not adequate and that the rate constants that give the best fit are very different from the rate constants used in kMC. The discrepancy between PK and kMC for the model of CO oxidation used here is surprising since the kMC model contains no lateral interactions that would make the coverage of the reactants spatially inhomogeneous. Nevertheless, such inhomogeneities are created by the interplay between the rate of adsorption, of desorption, and of vacancy creation by the chemical reactions.
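
    To make the PK side concrete, a minimal mean-field rate-equation model of CO oxidation of the kind being tested might look like the sketch below; the rate constants are illustrative assumptions, not those used in the paper's kMC.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate constants (hypothetical, not fitted to any real surface)
ka_co, kd_co = 1.0, 0.1      # CO adsorption / desorption
ka_o2 = 0.5                  # dissociative O2 adsorption
kr = 10.0                    # CO + O -> CO2 surface reaction

def rhs(t, y):
    """Mean-field PK equations for CO and O coverages (no spatial correlations)."""
    th_co, th_o = y
    th_e = max(1.0 - th_co - th_o, 0.0)    # empty-site coverage
    r = kr * th_co * th_o                  # reaction rate (ideal-mixing assumption)
    return [ka_co * th_e - kd_co * th_co - r,
            2 * ka_o2 * th_e**2 - r]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0])
print(sol.y[:, -1])          # steady-state coverages; TOF ~ kr * th_co * th_o
```

    The paper's point is precisely that fitting the constants of such equations to kMC "computer experiments" can succeed numerically while the fitted constants differ strongly from the true ones.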

  4. Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.

    PubMed

    Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle

    2014-11-01

    To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissue. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source localized within the cortical bone. S-values (absorbed dose per unit cumulated activity) were calculated with PENELOPE and with Monte Carlo N-Particle eXtended (MCNPX). Cytoplasm, nucleus, cell surface, mouse femur bone and the Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2 to 10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, cytoplasm or cell surface. The S-values calculated with PENELOPE agreed very well with the MCNPX results and with the Medical Internal Radiation Dose (MIRD) values, with relative deviations of less than 4.5%. The dose distribution to mouse bone marrow cells showed that the cells localized near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
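
    The S-value being computed here is the standard MIRD quantity; for a target region r_T and source region r_S it is (a textbook formula, not specific to this paper):

        S(r_T \leftarrow r_S) \;=\; \frac{1}{M(r_T)} \sum_i E_i \, Y_i \, \phi_i(r_T \leftarrow r_S)

    where E_i and Y_i are the energy and yield per decay of the i-th emission, φ_i is the fraction of that emitted energy absorbed in r_T for a source in r_S, and M(r_T) is the target mass.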

  5. Influence of Ice-phase of Hydrometeors on Moist-Convection

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.

    2003-01-01

    Climate models often ignore the influence of ice-phase physics (IPP) of hydrometeors as a second-order effect. This has also been true for McRAS (Microphysics of clouds with Relaxed Arakawa-Schubert Scheme) developed by the authors. Recognizing that the temperature sounding is critical for moist-convection, and that IPP would modify it, we investigated the influence of introducing IPP into McRAS coupled to FvGCM (finite volume General Circulation Model with NCAR physics). We analyzed three 3-yr-long simulations: the first, called Control Case (CC), had no IPP; the other two, called Experiments E1 and E2, had IPP introduced with two different in-cloud freezing assumptions. Simulation E1 assumed that all hydrometeors remain liquid in the updraft and freeze upon detrainment. Simulation E2 invoked the in-cloud freezing of new condensate generated at subfreezing temperatures in the updraft, while old cloud water continued to ascend as liquid; upon detrainment, this cloud water also froze, as in E1. With these assumptions, about 50% of hydrometeors froze in the tower and the rest froze in the anvil. However, in both E1 and E2, the frozen hydrometeors melted during fall at the first encounter of above-freezing ambient temperature. Comparative analysis revealed that E1 simulated far more mid-level and far fewer deep clouds, while E2 had modified deep and more mid-level clouds compared to CC, along with some major changes around the melt level. We infer that IPP produced a more realistic response in E2. At the basic level, the results show that ice-phase processes influence convective detrainment at mid and deep levels in accord with TOGA COARE observations. The results suggest that IPP can help to mitigate the less-than-observed mid-level and over-abundance of deep convective clouds in McRAS.

  6. Variation of k_{Qclin,Qmsr}^{fclin,fmsr} for the small-field dosimetric parameters percentage depth dose, tissue-maximum ratio, and off-axis ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francescon, Paolo, E-mail: paolo.francescon@ulssvicenza.it; Satariano, Ninfa; Beddar, Sam

    Purpose: Evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of k_{Qclin,Qmsr}^{fclin,fmsr} for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of k_{Qclin,Qmsr}^{fclin,fmsr} enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and the PTW PinPoint 31014 microchamber) and the PDDs and OARs measured with the Exradin W1 plastic scintillator detector (PSD), and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD measurements for fields greater than those produced using a 10-mm collimator. However, with the detector stem parallel to the beam axis, the microchambers could be used for TMR measurements for all field sizes. The microchambers could not be used for OAR measurements for small fields. Conclusions: Compared with MC simulation, the Exradin W1 PSD can reproduce the PDDs, TMRs, and OARs in water with a high degree of accuracy; thus, the correction used for converting dose is very close to unity. The stereotactic diode is a viable alternative because it shows an acceptable systematic error in the measurement of PDDs and TMRs and a significant underestimation in only the tail region of the OAR measurements, where the dose is low and differences in dose may not be therapeutically meaningful.

  7. Electro-optical rendezvous and docking sensors

    NASA Technical Reports Server (NTRS)

    Tubbs, David J.; Kesler, Lynn O.; Sirko, Robert J.

    1991-01-01

    Electro-optical sensors provide unique and critical functionality for space missions requiring rendezvous, docking, and berthing. McDonnell Douglas is developing a complete rendezvous and docking system for both manned and unmanned missions. This paper examines our sensor development and the systems and missions which benefit from rendezvous and docking sensors. Simulation results quantifying system performance improvements in key areas are given, with associated sensor performance requirements. A brief review of NASA-funded development activities and the current performance of electro-optical sensors for space applications is given. We will also describe current activities at McDonnell Douglas for a fully functional demonstration to address specific NASA mission needs.

  8. Ion-mediated interactions in suspensions of oppositely charged nanoparticles

    NASA Astrophysics Data System (ADS)

    Dahirel, Vincent; Hansen, Jean Pierre

    2009-08-01

    The structure of oppositely charged spherical nanoparticles (polyions), dispersed in ionic solutions with continuous solvent (primitive model), is investigated by Monte Carlo (MC) simulations, within explicit and implicit microion representations, over a range of polyion valences and densities, and microion concentrations. Systems with explicit microions are explored by semigrand canonical MC simulations, which allow density-dependent effective polyion pair potentials v_αβ^eff(r) to be extracted from measured partial pair distribution functions. Implicit microion MC simulations are based on pair potentials of mean force v_αβ^(2)(r) computed by explicit microion simulations of two charged polyions, in the low-density limit. In the vicinity of the liquid-gas separation expected for oppositely charged polyions, the implicit microion representation leads to an instability against density fluctuations for polyion valences |Z| significantly below those at which the instability sets in within the exact explicit microion representation. Far from this instability region, the v_αβ^(2)(r) are found to be fairly close to, but consistently more repulsive than, the effective pair potentials v_αβ^eff(r). This is corroborated by additional calculations of three-body forces between polyion triplets, which are repulsive when one polyion is of opposite charge to the other two. The explicit microion MC data were exploited to determine the ratio of salt concentrations c and c_o within the dispersion and the reservoir (Donnan effect). c/c_o is found to first increase before finally decreasing as a function of the polyion packing fraction.

  9. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.

    PubMed

    Martinez-Rovira, I; Sempau, J; Prezado, Y

    2012-05-01

    Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeters in volume) was decoupled from the dose bin grid, which has micrometer dimensions in the transverse direction of the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two different media. Optimization of the simulation parameters and the use of VR techniques saved a significant amount of computation time. Finally, parallelization of the simulations improved the calculation time even further, bringing it down to 1 day for a typical irradiation case envisaged in the forthcoming clinical trials in MRT. An example of an MRT treatment in a dog's head is presented, showing the performance of the calculation engine. The development of the first MC-based calculation engine for the future TPS devoted to MRT has been accomplished. This will constitute an essential tool for the future clinical trials on pets at the ESRF. The MC engine is able to calculate dose distributions in micrometer-sized bins in complex voxelized CT structures in a reasonable amount of time. Minimization of the computation time by using several approaches has led to timings that are adequate for pet radiotherapy at synchrotron facilities. The next step will consist of its integration into a user-friendly graphical front end.

  10. Simulation - McCandless, Bruce (Syncom IV)

    NASA Image and Video Library

    1985-04-15

    S85-30800 (14 April 1985) --- Astronaut Bruce McCandless II tests one of the possible methods of attempting to activate a switch on the Syncom-IV (LEASAT) satellite released April 13 into space from the Space Shuttle Discovery. The communications spacecraft failed to behave properly upon release and NASA officials and satellite experts are considering possible means of repair. McCandless was using a full scale mockup of the satellite in the Johnson Space Center's (JSC) mockup and integration laboratory.

  11. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    NASA Astrophysics Data System (ADS)

    El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.

    2015-10-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emission calculations in proton therapy. Prompt γ-rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, estimating the number and energy of emitted prompt γ-rays per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, an MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established over the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. In the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique against an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10⁵.
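
    The tally itself is simple: along each transport step, the expected prompt-γ spectrum for the local material and proton energy is accumulated, weighted by the step length. A minimal sketch, assuming the offline spectra database has already been built (the file name and array layout are illustrative assumptions):

        import numpy as np

        # Hypothetical offline database: expected prompt-gamma spectrum emitted per
        # unit track length, indexed by (material, proton-energy bin, gamma-energy bin).
        db = np.load("pg_spectra_db.npy")
        e_edges = np.linspace(0.0, 250.0, db.shape[1] + 1)  # proton energy bins (MeV)

        def score_step(score, voxel, material, e_kin, step_len):
            """Accumulate the expected prompt-gamma spectrum for one proton step:
            a track-length estimator deposits expectations, not sampled photons."""
            i = np.clip(np.searchsorted(e_edges, e_kin) - 1, 0, db.shape[1] - 1)
            score[voxel] += step_len * db[material, i]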

  12. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Gardner, R. P.; Verghese, K.

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
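
    The weight-window machinery mentioned here (splitting plus Russian roulette) can be summarized in a few lines. A generic sketch of the technique, not McENL's implementation; the window bounds and survival weight are user-chosen, with w_low ≤ w_survive ≤ w_high:

        import random

        def apply_weight_window(w, w_low, w_high, w_survive):
            """Return the list of statistical weights after a weight-window check:
            split heavy particles, play Russian roulette on light ones. Both
            branches preserve the expected total weight, so the tally is unbiased."""
            if w > w_high:                      # splitting
                n = int(w / w_high) + 1
                return [w / n] * n
            if w < w_low:                       # Russian roulette
                if random.random() < w / w_survive:
                    return [w_survive]          # survives with boosted weight
                return []                       # killed
            return [w]                          # inside the window: unchanged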

  13. Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

    PubMed Central

    Mathiowetz, A. M.; Goddard, W. A.

    1995-01-01

    Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 Å (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 Å or better. PMID:7549885

  14. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations.

    PubMed

    De Vries, Rowen J; Marsh, Steven

    2015-11-08

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as for lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower-energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and at upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean measured-to-predicted electron backscatter ratio of 0.997 ± 0.022 (1σ). The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs.
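
    The abstract does not reproduce the derived equation itself; the general procedure, though, is to fit an assumed functional form to the MC-computed backscatter factors as a function of the mean electron energy at the interface. A hedged sketch of that fitting step, with a plausible saturating-exponential form and placeholder data points (neither is the published equation or data):

        import numpy as np
        from scipy.optimize import curve_fit

        # Placeholder MC-simulated electron backscatter factors at a lead interface
        # versus mean electron energy E (MeV) at the interface.
        E = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0, 14.0])
        ebf = np.array([1.72, 1.68, 1.60, 1.48, 1.40, 1.31, 1.26, 1.23])

        def model(E, a, b, c):
            # Assumed saturating-exponential form; the published equation may differ.
            return a + b * np.exp(-c * E)

        popt, pcov = curve_fit(model, E, ebf, p0=(1.2, 0.6, 0.2))
        print("fitted a, b, c:", popt)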

  15. Learning and evolution in bacterial taxis: an operational amplifier circuit modeling the computational dynamics of the prokaryotic 'two component system' protein network.

    PubMed

    Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael

    2004-01-01

    Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins purposefully governing the myriad molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier, we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two-component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one hand, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolvable hardware approach. On the other hand, important protein 'tactilizing' properties are not captured by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
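
    The computational unit underlying this analogy is a thresholded weighted sum. A minimal McCulloch-Pitts sketch, with the CheY/FliM reading and all numbers purely illustrative:

        def mp_neuron(inputs, weights, threshold):
            """McCulloch-Pitts unit: fire (1) iff the weighted input sum
            reaches the threshold."""
            s = sum(x * w for x, w in zip(inputs, weights))
            return 1 if s >= threshold else 0

        # Toy reading of the CheY/FliM switch: binary activity of upstream signals
        # (e.g., phosphorylated CheY levels) as inputs; output 1 ~ clockwise
        # flagellar rotation (tumble). Weights and threshold are hypothetical.
        signals = [1, 1, 0]
        weights = [0.6, 0.5, 0.4]
        print(mp_neuron(signals, weights, threshold=1.0))  # -> 1 (tumble)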

  16. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations

    PubMed Central

    Marsh, Steven

    2015-01-01

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as for lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2–14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower-energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and at upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean measured-to-predicted electron backscatter ratio of 0.997 ± 0.022 (1σ). The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs. PACS numbers: 87.53.Bn, 87.55.K-, 87.56.bd PMID:26699566

  17. Investigation of the evolution of atmospheric particles with integration of the stochastic particle-resolved model partmc-mosaic and atmospheric measurements

    NASA Astrophysics Data System (ADS)

    Tian, Jian

    With the recently developed particle-resolved model PartMC-MOSAIC, the mixing state and other physico-chemical properties of individual aerosol particles can be tracked as the particles undergo aerosol aging processes. However, existing PartMC-MOSAIC applications have mainly been based on idealized scenarios, and a link to real atmospheric measurements has not yet been established. In this thesis, we extend the capability of PartMC-MOSAIC and apply the model framework to three distinct scenarios with different environmental conditions to investigate the physical and chemical aging of aerosols in those environments. The first study investigates the evolution of particle mixing state and cloud condensation nuclei (CCN) activation properties in a ship plume. Comparisons of our results with observations from the QUANTIFY Study in 2007 in the English Channel and the Bay of Biscay showed that the model was able to reproduce the observed evolution of total number concentration and the vanishing of the nucleation mode consisting of sulfate particles. Further process analysis revealed that during the first hour after emission, dilution reduced the total number concentration by four orders of magnitude, while coagulation reduced it by an additional order of magnitude. Neglecting coagulation resulted in an overprediction of more than one order of magnitude in the number concentration of particles smaller than 40 nm at a plume age of 100 s. Coagulation also significantly altered the mixing state of the particles, leading to a continuum of internal mixtures of sulfate and black carbon. The impact of condensation on CCN concentrations depended on the supersaturation threshold at which CCN activity was evaluated. Nucleation was observed to have a limited impact on the CCN concentration in the ship plume we studied, but was sensitive to formation rates of secondary aerosol. For the second study we adapted PartMC to represent the aerosol evolution in an aerosol chamber, with the intention of using the model as a tool to interpret and guide chamber experiments in the future. We added chamber-specific processes to our model formulation, such as wall loss due to particle diffusion and sedimentation, and dilution effects due to sampling. We also implemented a treatment of fractal particles to account for the morphology of agglomerates and its impact on aerosol dynamics. We verified the model with published results of self-similar size distributions, and validated the model using experimental data from an aerosol chamber. To this end we developed a fitting optimization approach to determine best-estimate values for the wall loss parameters based on minimizing the l2-norm of the model errors of the number distribution; a sketch of this step follows below. Obtaining the best fit required taking into account the non-spherical structure of the particle agglomerates. Our third study focuses on the implementation of the volatility basis set (VBS) framework in PartMC-MOSAIC to investigate the chemical aging of organic aerosols in the atmosphere. The updated PartMC-MOSAIC model framework was used to simulate the evolution of aerosols in air trajectories initialized from the CARES field campaign conducted in California in June 2010. The simulation results were compared with aircraft measurement data from the campaign. PartMC-MOSAIC was able to produce gas and aerosol concentrations at levels similar to the observation data. Moreover, the simulation with VBS enabled produced consistently more secondary organic aerosol (SOA). The investigation of particle mixing state revealed that the impact of the VBS framework on particle mixing state is sensitive to the daylight exposure time.
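
    The wall-loss fitting step lends itself to a compact illustration: choose the loss-rate parameter that minimizes the l2-norm of the model-measurement mismatch. A toy sketch with a deliberately simplified, size-independent loss model and synthetic data standing in for the full chamber model:

        import numpy as np
        from scipy.optimize import minimize

        def wall_loss_model(k_wall, t, n0):
            """Toy stand-in for the chamber model: exponential decay of total
            particle number with a single effective wall-loss rate (1/s)."""
            return n0 * np.exp(-k_wall * t)

        t = np.linspace(0.0, 3600.0, 60)                    # seconds
        rng = np.random.default_rng(0)
        obs = wall_loss_model(2.5e-4, t, 1e4) * (1 + 0.02 * rng.standard_normal(t.size))

        res = minimize(lambda p: np.linalg.norm(wall_loss_model(p[0], t, 1e4) - obs),
                       x0=np.array([1e-4]), method="Nelder-Mead")
        print("best-fit wall-loss rate (1/s):", res.x[0])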

  18. SU-E-T-310: Micro-Dosimetry Study of the Radiation Dose Enhancement at the Gold-Tissue Interface for Nanoparticle-Aided Radiation Therapy.

    PubMed

    Paudel, N; Shvydka, D; Parsai, E

    2012-06-01

    Gold nanoparticles (AuNP) have been proposed for local dose enhancement in radiation therapy. Due to the very sharp spatial fall-off of the effect, the dosimetry associated with such an approach is difficult to implement in a direct measurement. This study is aimed at establishing a micro-dosimetry technique for experimental verification of dose enhancement in the vicinity of the gold-tissue interface. The spatial distribution of the dose enhancement near the gold-tissue interface is modeled with the Monte Carlo (MC) package MCNP5 in a one-dimensional approach of a thin gold slab placed in an ICRU four-component tissue phantom. The model replicates the experiment, in which gold foils having thicknesses of 1, 10, and 100 μm and areas of 12.5×25 mm² are placed at a short distance from a clinical HDR brachytherapy (Ir-192) source. The measurements are carried out with a thin-film CdTe-based photodetector, having thickness <10 μm, allowing for high spatial resolution at progressively increasing distances from the foil. Our MC simulation results indicate that for the Ir-192 energy spectrum the dose enhancement region extends over ∼1 mm from the foil, changing from several hundred percent at the interface to just a few percent. The trend in the measured dose enhancement closely follows the results obtained from MC simulations. AuNPs have been established as promising candidates for dose enhancement in nanoparticle-aided radiation therapy, particularly in the energy range relevant to brachytherapy applications. Most researchers study the dose enhancement with MC simulations, or with experimental approaches involving biological systems, where the achievable dose enhancements are difficult to quantify. Successful development of micro-dosimetry approaches will pave the way for direct assessment of the dose in experiments on biological models, shedding some light on the apparent discrepancy between physical dose enhancement and biological effect established in studies of AuNP-aided radiation therapy. No conflict of interest. © 2012 American Association of Physicists in Medicine.

  19. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS, and FLUKA, previously examined for uniform scanning proton beams, needs to be re-evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and minimization of computational time. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that parameters must be customized with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.

  20. Free-energy landscape of intrinsically disordered proteins investigated by all-atom multicanonical molecular dynamics.

    PubMed

    Higo, Junichi; Umezawa, Koji

    2014-01-01

    We introduce computational studies on intrinsically disordered proteins (IDPs). In particular, we present our multicanonical molecular dynamics (McMD) simulations of two IDP-partner systems: NRSF-mSin3 and pKID-KIX. McMD is an enhanced conformational sampling method useful for biomolecular systems. An IDP adopts a specific tertiary structure upon binding to its partner molecule, although it is unstructured in the unbound (i.e., free) state. This IDP-specific property is called "coupled folding and binding". The McMD simulation treats the biomolecules with an all-atom model immersed in an explicit solvent. In the initial configuration of the simulation, the IDP and its partner molecules are set distant from each other, and the IDP conformation is disordered. The computationally obtained free-energy landscape for coupled folding and binding has shown that native- and non-native-complex clusters are distributed in a complicated manner in the conformational space. The all-atom simulations suggest that induced folding and population selection are intricately coupled in the coupled folding and binding. Further analyses have shown that the conformational fluctuations (dynamical flexibility) in the bound and unbound states are essentially important for characterizing IDP functioning.

  1. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method for reliably predicting protein concentrations and estimating their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
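
    A minimal sketch of the overall idea: fit a monotone standard curve, invert it to predict concentration, and propagate measurement noise by Monte Carlo to obtain an asymmetric prediction interval. PCHIP is used here as a readily available monotone interpolant, not the authors' penalized constrained spline, and all numbers are placeholders; only measurement error is propagated in this toy, whereas the paper also includes modeling error:

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        rng = np.random.default_rng(1)

        # Placeholder standard-curve data: known concentrations vs. assay intensities.
        conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
        intensity = np.array([0.05, 0.20, 0.38, 0.66, 1.02, 1.35, 1.52])
        sigma = 0.03                                # assumed measurement noise

        curve = PchipInterpolator(conc, intensity)  # monotone for monotone data

        def predict_conc(y, grid=np.linspace(0, 16, 2001)):
            """Invert the monotone standard curve by nearest match on a fine grid."""
            return grid[np.argmin(np.abs(curve(grid) - y))]

        # Monte Carlo prediction interval for one new measurement.
        y_new = 0.80
        draws = [predict_conc(y_new + rng.normal(0, sigma)) for _ in range(2000)]
        lo, hi = np.percentile(draws, [2.5, 97.5])
        print(f"conc ≈ {predict_conc(y_new):.2f}, 95% MC interval ({lo:.2f}, {hi:.2f})")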

  2. Competing growth processes induced by next-nearest-neighbor interactions: Effects on meandering wavelength and stiffness

    NASA Astrophysics Data System (ADS)

    Blel, Sonia; Hamouda, Ajmi BH.; Mahjoub, B.; Einstein, T. L.

    2017-02-01

    In this paper we explore the meandering instability of vicinal steps with a kinetic Monte Carlo (kMC) simulation model including attractive next-nearest-neighbor (NNN) interactions. kMC simulations show that increasing the NNN interaction strength leads to considerable reduction of the meandering wavelength and to a weaker dependence of the wavelength on the deposition rate F. The dependences of the meandering wavelength on temperature and deposition rate obtained with the simulations are in good quantitative agreement with the experimental result on the meandering instability of Cu(0 2 24) [T. Maroutian et al., Phys. Rev. B 64, 165401 (2001), 10.1103/PhysRevB.64.165401]. The effective step stiffness is found to depend not only on the strength of the NNN interactions and the Ehrlich-Schwoebel barrier, but also on F. We argue that attractive NNN interactions intensify the incorporation of adatoms at step edges and enhance step roughening. Competition between NNN and nearest-neighbor interactions results in an alternative form of meandering instability which we call "roughening-limited" growth, rather than the attachment-detachment-limited growth that governs the Bales-Zangwill instability. The computed effective wavelength and effective stiffness behave as λ_eff ∼ F^(-q) and β̃_eff ∼ F^(-p), respectively, with q ≈ p/2.

  3. The effect of dose enhancement near metal interfaces on synthetic diamond based X-ray dosimeters

    NASA Astrophysics Data System (ADS)

    Alamoudi, D.; Lohstroh, A.; Albarakaty, H.

    2017-11-01

    This study investigates the effects of dose enhancement at metallic interfaces on the photocurrent performance of synthetic-diamond-based X-ray dosimeters as a function of bias voltage. Monte Carlo (MC) simulations with the BEAMnrc code were carried out to simulate the dose enhancement factor (DEF) and compared against the equivalent photocurrent ratio from experimental investigations. The MC simulation results show that the sensitive region of the absorbed dose distribution covers a few micrometers from the interface. Experimentally, two single-crystal (SC) and one polycrystalline (PC) synthetic diamond samples were fabricated into detectors with carbon-based electrodes by boron and carbon ion implantation. Subsequently, the samples were each mounted inside a tissue-equivalent encapsulation to minimize unintended fluence perturbation. Dose enhancement was generated by placing copper, lead or gold near the active volume of the detectors using 50 kVp and 100 kVp X-rays relevant to medical dosimetry. The results show enhancement in the detectors' photocurrent performance when different metals are butted up to the diamond bulk, as expected. The variation in the photocurrent measurement depends on the type of diamond sample, the electrode fabrication and the applied bias voltage, indicating that dose enhancement near the detector may modify its electronic performance.

  4. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

    This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, and to predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface adsorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
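
    At the heart of any grand canonical MC code are trial particle insertions and deletions accepted with the standard probabilities (textbook formulas, not ParaGrandMC-specific; Λ is the thermal de Broglie wavelength, μ the chemical potential, and ΔU the potential-energy change of the trial move):

        P_{\mathrm{acc}}^{\mathrm{ins}} = \min\!\left[1,\; \frac{V}{\Lambda^{3}(N+1)}\, e^{\beta(\mu - \Delta U)}\right],
        \qquad
        P_{\mathrm{acc}}^{\mathrm{del}} = \min\!\left[1,\; \frac{\Lambda^{3} N}{V}\, e^{-\beta(\mu + \Delta U)}\right]

    Iterating such moves together with displacement (and, for alloys, species-swap) moves samples the grand canonical ensemble at fixed μ, V, T.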

  5. Fire and reduced vigor facilitate vegetation shifts: MC2 results for the conterminous US with CMIP5 climate futures

    NASA Astrophysics Data System (ADS)

    Bachelet, D. M.; Ferschweiler, K.; Baker, B.; Sleeter, B. M.

    2016-12-01

    Climate variability and a warming trend during the 21st century ensure fuel build-up and episodic catastrophic wildfires. We used downscaled (2.5 arcmin) CMIP5 climate futures from 20 models under RCP 8.5 to run the dynamic global vegetation model MC2 over the conterminous US and identify key drivers of land cover change. We show regional and temporal differences in the magnitude of projected carbon losses due to fire over the 21st century. We also examine the vigor (NPP/LAI) of forest lands and estimate the loss in carbon capture due to declines in production, as well as the increase in heterotrophic respiration due to increased mortality. We compare the simulated carbon sequestration potential of terrestrial biomes with the risk of carbon losses through disturbance. We quantify uncertainty in model results by showing the distribution of possible future impacts under the 20 futures. We explore the effects of land use and highlight the challenges we met in simulating credible transient management practices throughout the 20th century and into the future.

  6. Quantitative analysis of optical properties of flowing blood using a photon-cell interactive Monte Carlo code: effects of red blood cells' orientation on light scattering.

    PubMed

    Sakota, Daisuke; Takatani, Setsuo

    2012-05-01

    Optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model, with the physical properties of the flowing red blood cells (RBCs), such as cell size, shape, refractive index, distribution, and orientation, as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased; therefore, the light intensity transmitted forward in the direction perpendicular to the flow axis decreased. The pciMC model can reproduce the changes in photon propagation due to moving RBCs with various orientations. The RBC orientation that best simulated the experimental results was with the long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with an accuracy of approximately 1.0 HCT%. The photon-cell interactive Monte Carlo (pciMC) model can provide the optical properties of flowing blood and will facilitate the development of non-invasive monitoring of blood in extracorporeal circulatory systems.

  7. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z

    2015-06-15

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. GPU-memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from the different spots altogether with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization scheme on one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical usage.
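
    The key sampling idea is that histories are allotted across spots in proportion to the latest optimized intensities, so little effort is wasted on spots that end up with negligible weight. A simple multinomial stand-in for that step (the authors use a Metropolis scheme; the names below are illustrative):

        import numpy as np

        rng = np.random.default_rng(42)

        def allocate_histories(spot_weights, n_histories):
            """Draw each history's spot index with probability proportional to the
            current optimized spot intensities."""
            w = np.asarray(spot_weights, dtype=float)
            return rng.choice(w.size, size=n_histories, p=w / w.sum())

        spots = allocate_histories([0.1, 2.0, 0.5, 4.4], n_histories=10_000)
        print(np.bincount(spots) / spots.size)  # ≈ normalized spot intensities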

  8. The trade-off between morphology and control in the co-optimized design of robots.

    PubMed

    Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from MC co-optimization to control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on the one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in the face of new search techniques.

  9. Assessment of mean-field microkinetic models for CO methanation on stepped metal surfaces using accelerated kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten

    2017-10-01

    First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.

  10. The trade-off between morphology and control in the co-optimized design of robots

    PubMed Central

    Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from MC co-optimization to control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on the one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in the face of new search techniques. PMID:29023482

  11. Using self-organizing maps to identify potential halo white dwarfs.

    PubMed

    García-Berro, Enrique; Torres, Santiago; Isern, Jordi

    2003-01-01

    We present the results of an unsupervised classification of the disk and halo white dwarf populations in the solar neighborhood. The classification is done by merging the results of detailed Monte Carlo (MC) simulations, which reproduce very well the characteristics of the white dwarf populations in the solar neighborhood, with a catalogue of real stars. The resulting composite catalogue is analyzed using a competitive learning algorithm, in particular the so-called self-organizing map. The MC-simulated stars are used as tracers and help in identifying the resulting clusters. The results of such a strategy turn out to be quite satisfactory, suggesting that this approach can provide a useful framework for analyzing large databases of white dwarfs with well-determined kinematical, spatial and photometric properties once they become available in the next decade. Moreover, the results are of astrophysical interest as well, since a straightforward interpretation of several recent astronomical observations, like the detected microlensing events in the direction of the Magellanic Clouds, the possible detection of high-proper-motion white dwarfs in the Hubble Deep Field and the discovery of high-velocity white dwarfs in the solar neighborhood, suggests that a fraction of the baryonic dark matter component of our galaxy could be in the form of old and dim halo white dwarfs.

  12. SU-E-T-25: Real Time Simulator for Designing Electron Dual Scattering Foil Systems.

    PubMed

    Carver, R; Hogstrom, K; Price, M; Leblanc, J; Harris, G

    2012-06-01

    To create a user-friendly, accurate, real-time computer simulator to facilitate the design of dual scattering foil systems for electron beams on radiotherapy accelerators. The simulator should allow for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator consists of an analytical algorithm for calculating electron fluence and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with a refined Moliere formalism for scattering powers. The simulator also estimates central-axis x-ray dose contamination from the dual foil system. Once the geometry of the beamline is specified, the simulator allows the user to continuously vary the primary scattering foil material and thickness, the secondary scattering foil material and Gaussian shape (thickness and sigma), and the beam energy. The beam profile and x-ray contamination are displayed in real time. The simulator was tuned by comparison of off-axis electron fluence profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV and using the present foils on the Elekta radiotherapy accelerator, the simulator profiles agreed to within 2% of MC profiles within 20 cm of the central axis. The x-ray contamination predictions matched measured data to within 0.6%. The calculation time was approximately 100 ms using a single processor, which allows for real-time variation of foil parameters using sliding bars. A real-time dual scattering foil system simulator has been developed. The tool has been useful in a project to redesign an electron dual scattering foil system for one of our radiotherapy accelerators. The simulator has also been useful as an instructional tool for our medical physics graduate students. © 2012 American Association of Physicists in Medicine.
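
    The analytical core of such a simulator is compact: Fermi-Eyges theory gives the lateral spread at a plane z as a moment of the angular scattering power T(z') accumulated upstream, sigma_x^2(z) = integral over z' of (z - z')^2 T(z') dz', and the off-axis fluence is then Gaussian. A minimal sketch under those assumptions (single thin foil, illustrative numbers, not the authors' algorithm):

        import numpy as np

        def sigma_x(z_planes, scat_power, z):
            """Fermi-Eyges lateral spread (cm) at depth z from the depth-resolved
            linear angular scattering power T(z') (rad^2/cm)."""
            mask = z_planes <= z
            return np.sqrt(np.trapz((z - z_planes[mask])**2 * scat_power[mask],
                                    z_planes[mask]))

        def fluence_profile(r, sig):
            """Gaussian off-axis relative fluence for a thin scatterer system."""
            return np.exp(-r**2 / (2.0 * sig**2))

        # Example: one thin foil near z'=0 whose integrated scattering power gives
        # an angular spread of 0.05 rad; evaluate the profile at z = 100 cm.
        z_planes = np.linspace(0.0, 1e-3, 2)
        scat_power = np.full(2, 0.05**2 / 1e-3)       # ∫T dz' = (0.05 rad)^2
        sig = sigma_x(z_planes, scat_power, z=100.0)  # ≈ 0.05 rad * 100 cm = 5 cm
        print(sig, fluence_profile(np.array([0.0, 5.0, 10.0]), sig))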

  13. Formalization and Validation of an SADT Specification Through Executable Simulation in VHDL

    DTIC Science & Technology

    1991-12-01

    be found in (39, 40, 41). One recent summary of the SADT methodology was written by Marca and McGowan in 1988 (32). SADT is a methodology to provide...that is required. Also, the presence of "all" inputs and controls may not be needed for the activity to proceed. Marca and McGowan (32) describe a...diagrams which describe a complete system. Marca and McGowan define an SADT Model as: "a collection of carefully coordinated descriptions, starting from a

  14. Astronaut William McArthur prepares for a training exercise

    NASA Image and Video Library

    1993-07-20

    S93-38686 (20 July 1993) --- Wearing a training version of the partial pressure launch and entry garment, astronaut William S. McArthur prepares to rehearse emergency egress procedures for the STS-58 mission. McArthur, along with the five other NASA astronauts and a visiting payload specialist assigned to the seven-member crew, later simulated contingency evacuation procedures. Most of the training session took place in the crew compartment and full fuselage trainers of the Space Shuttle mockup and integration laboratory.

  15. Performance Enhancement of MC-CDMA System through Novel Sensitive Bit Algorithm Aided Turbo Multi User Detection

    PubMed Central

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high-data-rate wireless communication over frequency-selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER and overcoming the effects of MAI. In this paper, a low-complexity iterative soft sensitive bit algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding, by mitigating the detrimental effects of MAI. PMID:25714917

  16. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulations Guided by a Coarse-Grained Model.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2015-08-11

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient approach, to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water.
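
    The control flow of the six-step loop can be sketched with a 1D toy: a double-well "fine-grained" potential whose barrier frustrates local moves, and a softer "coarse-grained" surrogate that crosses the barrier easily. Everything here is an illustrative assumption; in particular, steps (4)-(5) are collapsed into a single jump whose Metropolis test uses the FG energy difference, whereas the published method drives an actual neMD trajectory with momentum reversal and a work-based acceptance.

        import numpy as np

        rng = np.random.default_rng(0)
        beta = 4.0
        U_fg = lambda x: (x**2 - 1.0)**2          # fine-grained double well
        U_cg = lambda x: 0.25 * (x**2 - 1.0)**2   # softer coarse-grained surrogate

        def mc_sweep(x, U, step, n):
            """Plain Metropolis dynamics, standing in for MD propagation."""
            for _ in range(n):
                y = x + rng.normal(0.0, step)
                if rng.random() < np.exp(-beta * (U(y) - U(x))):
                    x = y
            return x

        x, accepted = -1.0, 0
        for _ in range(2000):
            x = mc_sweep(x, U_fg, 0.05, 20)   # (1) propagate the FG system
            z = x                             # (2) map FG -> CG (identity here)
            z = mc_sweep(z, U_cg, 0.5, 20)    # (3) propagate the CG model
            # (4)-(5) drive FG toward the CG target (collapsed into one jump),
            # (6) accept or reject on the FG energy change
            if rng.random() < np.exp(-beta * (U_fg(z) - U_fg(x))):
                x, accepted = z, accepted + 1
        print("guided proposals accepted:", accepted, "of 2000")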

  17. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics—Monte Carlo Simulations Guided by a Coarse-Grained Model

    PubMed Central

    2015-01-01

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient approach, to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water. PMID:26574442

  18. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam

    2015-09-01

    Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method for predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distribution calculations were made with three voxel sizes: 1 × 1 × 0.1 cm3, 1 × 1 × 0.5 cm3, and 1 × 1 × 0.8 cm3. A total of 1 × 10⁹ histories were simulated in order to reach statistical uncertainties of 2%; each simulation took about 9-10 hours to complete. Dose distributions were simulated and measured in a water phantom with a 10 × 10 cm2 field size for 6 MV photon beams with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. The outputs of the simulations, i.e. the percent depth dose (PDD) and the dose profile at dmax from the three sets of calculations, are presented and compared with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) over 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results of this study show that the difference between the MC simulation and the experimental data depends on the voxel size for both the PDD and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained with the 1 × 1 × 0.8 cm3 voxel size. The profile analysis focused on the high dose-gradient region; for the profile scan along the Y axis, the largest difference, about 12%, was obtained with the 1 × 1 × 0.1 cm3 voxel size. This study demonstrates that the choice of voxel arrangement in Monte Carlo simulation is important.

  19. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
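
    The sampling idea can be sketched in a few lines: histories are allocated to pencil-beam spots in proportion to the current optimization weights, so the statistical effort follows the dose. The spot intensities, history budget, and multinomial allocation below are illustrative assumptions, not the library's API.

        import numpy as np

        rng = np.random.default_rng(1)
        intensities = np.array([0.1, 2.0, 5.0, 0.5, 9.0])  # current spot weights
        n_total = 100_000                                  # MC histories this iteration

        # High-intensity spots dominate the dose, so they receive most of the
        # simulated histories; low-intensity spots get correspondingly few.
        p = intensities / intensities.sum()
        histories = rng.multinomial(n_total, p)
        for spot, (w, n) in enumerate(zip(intensities, histories)):
            print(f"spot {spot}: weight {w:4.1f} -> {n} histories")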

  20. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  1. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  2. Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
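
    For reference, the per-subcarrier minimum mean-square error (MMSE) weighting at the core of FDE can be sketched as follows. The spreading and despreading stages of orthogonal MC DS-CDMA are omitted, and the channel, noise level, and BPSK mapping are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        N, sigma2 = 64, 0.05                                 # subcarriers, noise variance
        X = rng.choice([-1.0, 1.0], N)                       # one BPSK symbol per subcarrier
        h = (rng.normal(size=8) + 1j * rng.normal(size=8)) / 4.0  # 8-tap fading channel
        H = np.fft.fft(h, N)                                 # channel frequency response
        n = np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
        Y = H * X + n                                        # received, per subcarrier

        W = np.conj(H) / (np.abs(H)**2 + sigma2)             # one-tap MMSE-FDE weights
        X_hat = np.sign((W * Y).real)
        print("subcarrier errors:", int(np.sum(X_hat != X)))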

  3. PHITS simulations of absorbed dose out-of-field and neutron energy spectra for ELEKTA SL25 medical linear accelerator.

    PubMed

    Puchalska, Monika; Sihver, Lembit

    2015-06-21

    Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high-energy photons are used, since neutrons are then produced both in the accelerator head and inside the patient. Using the MC technique, the created photons and particles can be followed, and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance in pediatric treatments for minimizing the risk to normal healthy tissue, e.g. secondary cancer. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS to calculate the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. The model was validated against measurements performed with bubble detector spectrometers and a Bonner sphere for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose in the out-of-field region was around 11%. Considering that the simulated geometry was simplified and did not include any potentially scattering materials in the surroundings, this result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was observed in comparison with data found in the literature.

  4. PHITS simulations of absorbed dose out-of-field and neutron energy spectra for ELEKTA SL25 medical linear accelerator

    NASA Astrophysics Data System (ADS)

    Puchalska, Monika; Sihver, Lembit

    2015-06-01

    Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high-energy photons are used, since neutrons are then produced both in the accelerator head and inside the patient. Using the MC technique, the created photons and particles can be followed, and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance in pediatric treatments for minimizing the risk to normal healthy tissue, e.g. secondary cancer. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS to calculate the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. The model was validated against measurements performed with bubble detector spectrometers and a Bonner sphere for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose in the out-of-field region was around 11%. Considering that the simulated geometry was simplified and did not include any potentially scattering materials in the surroundings, this result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was observed in comparison with data found in the literature.

  5. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissues is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis, for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured % errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R² = 0.99), with average % errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for the WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.
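
    Empirical photon-transport models of this kind are often built on the diffusion approximation. The sketch below evaluates the standard infinite-medium point-source fluence; the NIR optical properties are chosen for illustration and are not the paper's fitted values, and the paper's actual empirical model may differ.

        import numpy as np

        def fluence_diffusion(r_cm, mu_a, mu_s_prime):
            """Fluence of an isotropic point source in an infinite medium under
            the diffusion approximation (per unit source power)."""
            D = 1.0 / (3.0 * (mu_a + mu_s_prime))  # diffusion coefficient (cm)
            mu_eff = np.sqrt(mu_a / D)             # effective attenuation (1/cm)
            return np.exp(-mu_eff * r_cm) / (4.0 * np.pi * D * r_cm)

        # Illustrative NIR tissue properties: mu_a = 0.2/cm, mu_s' = 10/cm
        for r in (0.5, 1.0, 2.0):
            print(f"{r} cm: {fluence_diffusion(r, mu_a=0.2, mu_s_prime=10.0):.4f}")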

  6. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems, and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. Good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
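
    The correction factor is a ratio of dose-averaged LET values, which can be computed directly from scored fluence weights and LET values. The tallies below are toy numbers standing in for GATE/GEANT4 output at a single depth.

        import numpy as np

        def let_d(weights, lets):
            """Dose-averaged LET: sum(w*L^2)/sum(w*L), with w the fluence weight
            of each scored component (dose ~ fluence * LET)."""
            w, L = np.asarray(weights), np.asarray(lets)
            return np.sum(w * L**2) / np.sum(w * L)

        primary_w, primary_L = [1.0, 0.8], [0.5, 0.6]        # keV/um, toy values
        secondary_w, secondary_L = [0.1, 0.05], [2.0, 5.0]   # keV/um, toy values

        L_all = let_d(primary_w + secondary_w, primary_L + secondary_L)
        L_prim = let_d(primary_w, primary_L)
        print("L_sec =", round(L_all / L_prim, 3))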

  7. STS-31 MS McCandless and MS Sullivan during JSC WETF underwater simulation

    NASA Image and Video Library

    1990-03-05

    This overall view shows STS-31 Mission Specialist (MS) Bruce McCandless II (left) and MS Kathryn D. Sullivan making a practice space walk in JSC's Weightless Environment Training Facility (WETF) Bldg 29 pool. McCandless works with a mockup of the remote manipulator system (RMS) end effector which is attached to a grapple fixture on the Hubble Space Telescope (HST) mockup. Sullivan manipulates HST hardware on the Support System Module (SSM) forward shell. SCUBA-equipped divers monitor the extravehicular mobility unit (EMU) suited crewmembers during this simulated extravehicular activity (EVA). No EVA is planned for the Hubble Space Telescope (HST) deployment, but the duo has trained for contingencies which might arise during the STS-31 mission aboard Discovery, Orbiter Vehicle (OV) 103. Photo taken by NASA JSC photographer Sheri Dunnette.

  8. STS-31 MS McCandless and MS Sullivan during JSC WETF underwater simulation

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This overall view shows STS-31 Mission Specialist (MS) Bruce McCandless II (left) and MS Kathryn D. Sullivan making a practice space walk in JSC's Weightless Environment Training Facility (WETF) Bldg 29 pool. McCandless works with a mockup of the remote manipulator system (RMS) end effector which is attached to a grapple fixture on the Hubble Space Telescope (HST) mockup. Sullivan manipulates HST hardware on the Support System Module (SSM) forward shell. SCUBA-equipped divers monitor the extravehicular mobility unit (EMU) suited crewmembers during this simulated extravehicular activity (EVA). No EVA is planned for the Hubble Space Telescope (HST) deployment, but the duo has trained for contingencies which might arise during the STS-31 mission aboard Discovery, Orbiter Vehicle (OV) 103. Photo taken by NASA JSC photographer Sheri Dunnette.

  9. Two-phase nc-TiN/a-(C,CN{sub x}) nanocomposite films: A HRTEM and MC simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, J.; Lu, Y. H.; Hu, X. J.

    2013-06-18

    The grain growth in two-phase nanocomposite Ti-C{sub x}-N{sub y} thin films grown by reactive close-field unbalanced magnetron sputtering in an Ar-N{sub 2} gas mixture, with microstructures comprising a nanocrystalline (nc-) Ti(N,C) phase surrounded by an amorphous (a-) (C,CN{sub x}) phase, was investigated by a combination of high-resolution transmission electron microscopy (HRTEM) and Monte Carlo (MC) simulations. The HRTEM results revealed that amorphous-free solid solution Ti(C,N) thin films exhibited polycrystallites with different sizes, orientations and irregular shapes. The grain size varied in the range between several nanometers and several tens of nanometers. Further increase of C content (up to {approx}19 at.% C) made the amorphous phase wet the nanocrystallites, which strongly hindered the growth of the nanocrystallites. As a result, more regular Ti(C,N) nanocrystallites with an average size of {approx}5 nm were found to be separated by {approx}0.5-nm amorphous phases. When the C content was further increased (up to {approx}48 at.% in this study), thicker amorphous matrices were produced, followed by the formation of smaller grains with a lognormal distribution. Our MC analysis indicated that with increasing amorphous volume fraction (i.e. increasing C content), the transformation from nc/nc grain boundary (GB)-curvature-driven growth to a/nc GB-curvature-driven growth is directly responsible for the observed grain growth from great inhomogeneity to homogeneity.

  10. Airborne and ground based measurements in McMurdo Sound, Antarctica, for the validation of satellite derived ice thickness

    NASA Astrophysics Data System (ADS)

    Rack, Wolfgang; Haas, Christian; Langhorne, Pat; Leonard, Greg; Price, Dan; Barnsdale, Kelvin; Soltanzadeh, Iman

    2014-05-01

    Melting and freezing processes in the ice shelf cavities of the Ross and McMurdo Ice Shelves significantly influence the sea ice formation in McMurdo Sound. Between 2009 and 2013 we used a helicopter-borne laser and electromagnetic induction sounder (EM bird) to measure thickness and freeboard profiles across the ice shelf and the landfast sea ice, which was accompanied by extensive field validation, and coordinated with satellite altimeter overpasses. Using freeboard and thickness, the bulk density of all ice types was calculated assuming hydrostatic equilibrium. Significant density steps were detected between first-year and multi-year sea ice, with higher values for the younger sea ice. Values are overestimated in areas with an abundance of sub-ice platelets because both ice thickness and freeboard are overestimated there. On the ice shelf, bulk ice densities were sometimes higher than that of pure ice, which can be explained by both the accretion of marine ice and glacial sediments. For thin ice, the freeboard-to-thickness conversion critically depends on the knowledge of snow properties. Our measurements allow tuning and validation of snow cover simulations using the Weather Research and Forecasting (WRF) model. The simulated snow cover is used to calculate ice thickness from satellite-derived freeboard. The results of our measurements, which are supported by the New Zealand Antarctic programme, draw a picture of how oceanographic processes influence the ice shelf morphology and sea ice formation in McMurdo Sound, and how satellite-derived freeboard from ICESat and CryoSat together with information on snow cover can potentially capture the signature of these processes.
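
    The hydrostatic step can be written down directly: the weight of ice plus snow balances the displaced water, which is inverted for the bulk ice density. The water and snow densities below are illustrative mid-range values, not the campaign's calibrated ones.

        def bulk_ice_density(thickness_m, freeboard_m, snow_m=0.0,
                             rho_water=1027.0, rho_snow=330.0):
            """Bulk density (kg/m^3) of floating ice from total thickness and
            freeboard, assuming hydrostatic equilibrium:
                rho_i * h + rho_s * s = rho_w * (h - f)"""
            h, f, s = thickness_m, freeboard_m, snow_m
            return (rho_water * (h - f) - rho_snow * s) / h

        # 2 m thick ice with 0.20 m freeboard under 0.10 m of snow -> ~908 kg/m^3
        print(bulk_ice_density(2.0, 0.20, snow_m=0.10))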

  11. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
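
    The task-splitting idea, giving each device an independent share of the histories and pooling the tallies afterwards, can be sketched on CPUs with a process pool. This is a stand-in for the CUDA implementation: a fake per-history energy deposit replaces the transport physics, and the pooled relative error is computed from batch moments.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def run_batch(args):
            """Stand-in for one GPU's share of the histories: returns the batch
            mean and mean-square energy deposit (single-voxel toy tally)."""
            seed, n = args
            rng = np.random.default_rng(seed)
            d = rng.exponential(1.0, n)       # fake per-history deposits
            return d.mean(), (d**2).mean(), n

        if __name__ == "__main__":
            jobs = [(0, 500_000), (1, 500_000)]   # two devices, half the load each
            with ProcessPoolExecutor(2) as ex:
                parts = list(ex.map(run_batch, jobs))
            n = sum(p[2] for p in parts)
            mean = sum(p[0] * p[2] for p in parts) / n
            msq = sum(p[1] * p[2] for p in parts) / n
            rel_err = np.sqrt((msq - mean**2) / n) / mean
            print(f"dose {mean:.4f}, relative error {rel_err:.2e}")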

  12. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of aircraft crashes. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of the models using the root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
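
    The data-generating step common to such simulation studies can be sketched as follows: draw latent failure times from cause-specific exponential hazards, take the minimum against an independent censoring time, and record which cause (if any) was observed. The hazard and censoring rates are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 1000
        h1, h2, hc = 0.02, 0.05, 0.03       # cause-specific and censoring hazards
        t1 = rng.exponential(1 / h1, n)     # latent time to failure from cause 1
        t2 = rng.exponential(1 / h2, n)     # latent time to failure from cause 2
        c = rng.exponential(1 / hc, n)      # independent censoring time

        time = np.minimum.reduce([t1, t2, c])
        cause = np.select([(t1 <= t2) & (t1 <= c), (t2 < t1) & (t2 <= c)],
                          [1, 2], default=0)        # 0 = censored
        print("censored / cause 1 / cause 2:", np.bincount(cause, minlength=3))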

  13. Simulation of gas adsorption on a surface and in slit pores with grand canonical and canonical kinetic Monte Carlo methods.

    PubMed

    Ustinov, E A; Do, D D

    2012-08-21

    We present for the first time in the literature a new scheme of kinetic Monte Carlo method applied on a grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It had been applied in a canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, gas adsorption on a graphite open surface and in graphitic slit pores. However, in spite of the conformity of canonical and grand canonical ensembles, the latter is more relevant in the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of the kMC to open systems. The developed GC-kMC was proved to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We showed that in slit micropores the hexagonal packing in the layers adjacent to the pore walls is observed at high loadings even at temperatures above the triple point of the bulk phase. The potential and applicability of the GC-kMC are further shown with the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase.
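
    The grand canonical acceptance rules underlying any GC scheme can be sketched with an ideal-gas toy, where insertions and deletions carry no potential-energy change and the particle number should converge to exp(beta*mu)*V (taking the thermal wavelength as unity). This shows the plain Metropolis GCMC rule, not the kinetic MC realization developed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        V, beta_mu = 100.0, 0.0          # volume and beta*mu; Lambda^3 = 1 assumed
        N, samples = 0, []
        for _ in range(200_000):
            if rng.random() < 0.5:       # attempt insertion (ideal gas: dU = 0)
                if rng.random() < min(1.0, V * np.exp(beta_mu) / (N + 1)):
                    N += 1
            elif N > 0:                  # attempt deletion
                if rng.random() < min(1.0, N * np.exp(-beta_mu) / V):
                    N -= 1
            samples.append(N)
        print("<N> =", np.mean(samples[50_000:]), "expected:", V * np.exp(beta_mu))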

  14. Evaluation of the medical and occupational shielding in cerebral angiography using Monte Carlo simulations and virtual anthropomorphic phantoms

    NASA Astrophysics Data System (ADS)

    Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V. E.; Maia, Ana F.

    2015-12-01

    Cerebral angiography (CA) exams may provide valuable diagnostic information for patients with suspected cerebral diseases, but they may also deliver high doses of radiation to patients and medical staff. In order to evaluate the medical and occupational exposures under different irradiation conditions, Monte Carlo (MC) simulations were employed. Virtual anthropomorphic phantoms (MASH) were used to represent the patient and the physician inside a typical fluoroscopy room, also simulated in detail, incorporated in the MCNPX 2.7.0 MC code. The evaluation was carried out by means of dose conversion coefficients (CCs) for equivalent (H) and effective (E) doses normalized by the air kerma-area product (KAP). The CCs for the entrance surface dose (ESD) of the patient and the equivalent dose to the eyes of the medical staff were determined, because CA exams present higher risks for those organs. The tube voltage was 80 kVp, and Al filters with thicknesses of 2.5 mm, 3.5 mm and 4.0 mm were positioned in the beams. Two projections were simulated: posterior-anterior (PA) and right-lateral (RLAT). In all situations the CC values increased with increasing Al filtration. The highest dose was obtained for the RLAT projection with a 4.0 mm Al filter. In this projection, the ESD/KAP and E/KAP values for the patient were 11 (14%) mGy/Gy cm2 and 0.12 (0.1%) mSv/Gy cm2, respectively. For the physician, the use of a suspended lead glass shield and a lead curtain attached to the surgical table resulted in a significant reduction of the CCs. MC simulation proved to be a very important tool in radiation protection dosimetry; specifically, in this study several parameters could be evaluated that could not be assessed experimentally.

  15. Infrared spectroscopy and molecular simulations of a polymeric sorbent and its enantioselective interactions with benzoin enantiomers.

    PubMed

    Tsui, Hung-Wei; Willing, Jonathan N; Kasat, Rahul B; Wang, Nien-Hwa Linda; Franses, Elias I

    2011-11-10

    Retention factors, k(R) and k(S), and enantioselectivities, S ≡ k(R)/k(S), of amylose tris[(S)-α-methylbenzylcarbamate] (AS) sorbent for benzoin (B) enantiomers were measured for various isopropyl alcohol (IPA)/n-hexane compositions of the high-performance liquid chromatography (HPLC) mobile phase. Novel data for pure n-hexane show that k(R) = 106, k(S) = 49.6, and S = 2.13. With some IPA from 0.5 to 10 vol %, with S = 1.8-1.4, the retention factors were smaller. Infrared spectra showed evidence of substantial hydrogen bonding (H-bonding) interactions in the pure polymer phase and additional H-bonding interactions between AS and benzoin. Density functional theory (DFT) was used to model the chain-chain and chain-benzoin H-bonding and other interactions. DFT was also used to predict fairly well the IR wavenumber shifts caused by the H-bonds. DFT simulations of IR bands of NH and C═O allowed for the first time the predictions of relative intensities and relative populations of H-bonding strengths. Molecular dynamics (MD) simulations were used to model a single 12-mer polymer chain. MD simulations predicted the existence of various potentially enantioselective cavities, two of which are sufficiently large to accommodate a benzoin molecule. Then "docking" studies of benzoin in AS with MD, Monte Carlo (MC), and MC/MD simulations were done to probe the AS-B interactions. The observed enantioselectivities are predicted to be primarily due to two H-bonds, of the kind AS CO···HO (R)-benzoin and AS NH···OC (R)-benzoin, and two π-π (phenyl-phenyl) interactions for (R)-benzoin and one H-bond, of type AS CO···HO (S)-benzoin, and one π-π interaction for (S)-benzoin. The MC/MD predictions are consistent with the HPLC and IR results.

  16. HF band filter bank multi-carrier spread spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system, presented in [1] and [2], to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited for the HF channel because it performs well in channels with frequency selective fading and interference. This paper describes new algorithms for packet detection, timing recovery and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak to average power ratio (PAPR) of the FB-MC-SS waveform is presented. Application of this algorithm results in a waveform with low PAPR. Simulation results using a wide band HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.

  17. SU-F-I-32: Organ Doses from Pediatric Head CT Scan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Liu, Q; Qiu, J

    Purpose: To evaluate the organ doses of pediatric patients undergoing head CT scans using Monte Carlo (MC) simulation, and to compare them with measurements in an anthropomorphic child phantom. Methods: A ten-year-old child voxel phantom was developed from CT images; the voxel size of the phantom was 2 mm × 2 mm × 2 mm. Organ doses from a head CT scan were simulated using the MCNPX software, with 180 detectors placed in the voxel phantom to tally the doses to the represented tissues or organs. For the simulation, 120 kVp and 88 mA were selected as the scan parameters. The scan range covered from the top of the head to the end of the chin; this protocol is used on CT simulators for radiotherapy. To validate the simulated results, organ doses were measured with radiophotoluminescence (RPL) detectors placed in 28 organs of a ten-year-old CIRS ATOM phantom. Results: The organ dose results from the MC simulation and the phantom measurements matched well. As expected, the eyes received the highest organ dose: 28.11 mGy by simulation and 27.34 mGy by measurement, respectively. Doses to organs outside the scan volume were much lower than those inside it; thymus doses of more than 10 mGy were observed because the radiotherapy CT protocol covers more of the body than a routine head CT scan. Conclusion: As the eyes are superficial organs, they may receive the highest radiation dose during the CT scan. Considering their relatively high radiosensitivity, the use of shielding material or organ-based tube current modulation should be encouraged to reduce eye radiation risks. Scan range is one of the most important factors affecting organ doses during a CT scan; using as short a scan range as reasonably possible should help to reduce the patient radiation dose. This work was supported by the National Natural Science Foundation of China (11475047).

  18. Improving dynamic global vegetation model (DGVM) simulation of western U.S. rangelands vegetation seasonal phenology and productivity

    NASA Astrophysics Data System (ADS)

    Kerns, B. K.; Kim, J. B.; Day, M. A.; Pitts, B.; Drapek, R. J.

    2017-12-01

    Ecosystem process models are increasingly being used in regional assessments to explore potential changes in future vegetation and NPP due to climate change. We use the dynamic global vegetation model MAPSS-Century 2 (MC2) as one line of evidence for regional climate change vulnerability assessments for the US Forest Service, focusing our fine-tuning and model calibration on observational sources related to forest vegetation. However, there is much interest in understanding projected changes for arid rangelands in the western US such as grasslands, shrublands, and woodlands. Rangelands provide many ecosystem service benefits, support the sustainability of local rural communities, provide habitat for threatened and endangered species, and are threatened by annual grass invasion. Past work suggested MC2 performance related to arid rangeland plant functional types (PFTs) was poor, and the model has difficulty distinguishing annual versus perennial grasslands. Our objectives are to increase the model performance for rangeland simulations and explore the potential for splitting the grass plant functional type into annual and perennial. We used the tri-state Blue Mountain Ecoregion as our study area and maps of potential vegetation from interpolated ground data, the National Land Cover Data Database, and ancillary NPP data derived from the MODIS satellite. MC2 historical simulations for the area overestimated woodland occurrence and underestimated shrubland and grassland PFTs. The spatial location of the rangeland PFTs also often did not align well with observational data. While some disagreement may be due to differences in the respective classification rules, the errors are largely linked to MC2's tree and grass biogeography and physiology algorithms. Presently, only grass and forest productivity measures and carbon stocks are used to distinguish PFTs. MC2 grass and tree productivity simulation is problematic, in particular grass seasonal phenology in relation to seasonal patterns of temperature and precipitation. The algorithm also does not accurately translate simulated carbon stocks into the canopy allometry of the woodland tree species that dominate the BME, thereby inaccurately shading out the grasses in the understory. We are devising improvements to address these shortcomings in the model architecture.

  19. Simulation and analysis of a proposed replacement for the McCook port of entry inspection station

    DOT National Transportation Integrated Search

    1999-04-01

    This report describes a study of a proposed replacement for the McCook Port of Entry inspection station at the entry to South Dakota. In order to assess the potential for a low-speed weigh in motion (WIM) scale within the station to pre-screen trucks...

  20. Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria

    ERIC Educational Resources Information Center

    Li, Zhao; Corti, David S.

    2018-01-01

    The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…

  1. SU-E-T-274: Radiation Therapy with Very High-Energy Electron (VHEE) Beams in the Presence of Metal Implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, C; Palma, B; Qu, B

    2014-06-01

    Purpose: To evaluate the effect of metal implants on treatment plans for radiation therapy with very high-energy electron (VHEE) beams. Methods: The DOSXYZnrc/BEAMnrc Monte Carlo (MC) codes were used to simulate 50–150MeV VHEE beam dose deposition and its effects on steel and titanium (Ti) heterogeneities in a water phantom. Heterogeneities of thicknesses ranging from 0.5cm to 2cm were placed at 10cm depth. MC was also used to calculate electron and photon spectra generated by the VHEE beams' interaction with metal heterogeneities. The original VMAT patient dose calculation was planned in Eclipse. Patient dose calculations with MC-generated beamlets were planned usingmore » a Matlab GUI and research version of RayStation. VHEE MC treatment planning was performed on water-only geometry and water with segmented prostheses (steel and Ti) geometries with 100MeV and 150MeV beams. Results: 100MeV PDD 5cm behind steel/Ti heterogeneity was 51% less than in the water-only phantom. For some cases, dose enhancement lateral to the borders of the phantom increased the dose by up to 22% in steel and 18% in Ti heterogeneities. The dose immediately behind steel heterogeneity decreased by an average of 6%, although for 150MeV, the steel heterogeneity created a 23% increase in dose directly behind it. The average dose immediately behind Ti heterogeneities increased 10%. The prostate VHEE plans resulted in mean dose decrease to the bowel (20%), bladder (7%), and the urethra (5%) compared to the 15MV VMAT plan. The average dose to the body with prosthetic implants was 5% higher than to the body without implants. Conclusion: Based on MC simulations, metallic implants introduce dose perturbations to VHEE beams from lateral scatter and backscatter. However, when performing clinical planning on a prostate case, the use of multiple beams and inverse planning still produces VHEE plans that are dosimetrically superior to photon VMAT plans. BW Loo and P Maxim received research support from RaySearch laboratories; B Hardemark and E Hynning are employees of RaySearch.« less

  2. Monte Carlo simulation for scanning technique with scattering foil free electron beam: A proof of concept study

    PubMed Central

    Sung, Wonmo; Park, Jong In; Kim, Jung-in; Carlson, Joel; Ye, Sung-Joon

    2017-01-01

    This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans. PMID:28493940

  3. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that involves using recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a single CdTe PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering becomes the prevailing interaction (>50 keV).
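
    The correction itself amounts to subtracting the simulated scatter estimate from each energy bin of the measured data before the log-attenuation step, which raises the reconstructed attenuation. The counts and path length below are toy numbers for a single detector pixel, not Multix ME-100 data.

        import numpy as np

        I0 = np.array([1.0e5, 8.0e4, 6.0e4])       # open-beam counts per energy bin
        I_meas = np.array([2.3e4, 2.1e4, 1.9e4])   # measured counts behind the object
        S_sim = np.array([1.5e3, 1.8e3, 2.2e3])    # Monte Carlo scatter estimate

        t = 2.0                                    # ray path length through object (cm)
        mu_naive = -np.log(I_meas / I0) / t
        mu_corr = -np.log((I_meas - S_sim) / I0) / t
        print("uncorrected mu (1/cm):", np.round(mu_naive, 4))
        print("scatter-corrected mu (1/cm):", np.round(mu_corr, 4))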

  4. Monte Carlo simulation for scanning technique with scattering foil free electron beam: A proof of concept study.

    PubMed

    Sung, Wonmo; Park, Jong In; Kim, Jung-In; Carlson, Joel; Ye, Sung-Joon; Park, Jong Min

    2017-01-01

    This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans.

  5. Characterization of protein-folding pathways by reduced-space modeling.

    PubMed

    Kmiecik, Sebastian; Kolinski, Andrzej

    2007-07-24

    Ab initio simulations of folding pathways are currently limited to very small proteins. For larger proteins, some approximations or simplifications in protein models need to be introduced. Protein folding and unfolding are among the basic processes in the cell and are very difficult to characterize in detail by experiment or simulation. Chymotrypsin inhibitor 2 (CI2) and barnase are probably the best characterized experimentally in this respect. For these model systems, initial folding stages were simulated by using CA-CB-side chain (CABS), a reduced-space protein-modeling tool. CABS employs knowledge-based potentials that proved to be very successful in protein structure prediction. With the use of isothermal Monte Carlo (MC) dynamics, initiation sites with a residual structure and weak tertiary interactions were identified. Such structures are essential for the initiation of the folding process through a sequential reduction of the protein conformational space, overcoming the Levinthal paradox in this manner. Furthermore, nucleation sites that initiate a network of tertiary interactions were located. The MC simulations correspond perfectly to the results of experimental and theoretical research and bring insights into the CI2 folding mechanism: an unambiguous sequence of folding events was reported, as well as cooperative substructures compatible with those obtained in recent molecular dynamics unfolding studies. The correspondence between the simulation and experiment shows that knowledge-based potentials are not only useful in protein structure predictions but are also capable of reproducing the folding pathways. Thus, the results of this work significantly extend the applicability range of reduced models in the theoretical study of proteins.

  6. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    PubMed

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry. This limits the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
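
    Navigation in such a geometry reduces to the distance along a ray to each bounding quadric, i.e. the smallest positive root of a quadratic in the path length. A minimal sketch, with the matrix-vector form of the quadric (symmetric Q assumed) and the tolerances being our own choices:

        import numpy as np

        def distance_to_quadric(p, d, Q, P, R):
            """Distance along unit direction d from point p to the quadric
            x^T Q x + P.x + R = 0 (Q symmetric); np.inf if the ray misses."""
            a = d @ Q @ d
            b = 2.0 * (p @ Q @ d) + P @ d
            c = p @ Q @ p + P @ p + R
            if abs(a) < 1e-12:                     # degenerate, linear case
                return -c / b if b != 0.0 and -c / b > 0.0 else np.inf
            disc = b * b - 4.0 * a * c
            if disc < 0.0:
                return np.inf
            roots = sorted([(-b - np.sqrt(disc)) / (2 * a),
                            (-b + np.sqrt(disc)) / (2 * a)])
            return next((t for t in roots if t > 1e-9), np.inf)

        # Unit sphere x^2 + y^2 + z^2 - 1 = 0, ray from the origin along +z: t = 1
        Q, P, R = np.eye(3), np.zeros(3), -1.0
        print(distance_to_quadric(np.zeros(3), np.array([0.0, 0.0, 1.0]), Q, P, R))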

  7. Monte Carlo dosimetric characterization of the Flexisource Co-60 high-dose-rate brachytherapy source using PENELOPE.

    PubMed

    Almansa, Julio F; Guerrero, Rafael; Torres, Javier; Lallena, Antonio M

    ⁶⁰Co sources have been commercialized as an alternative to ¹⁹²Ir sources for high-dose-rate (HDR) brachytherapy. One of them is the Flexisource Co-60 HDR source manufactured by Elekta. The only available dosimetric characterization of this source is that of Vijande et al. [J Contemp Brachytherapy 2012; 4:34-44], whose results were not included in the AAPM/ESTRO consensus document. In that work, the dosimetric quantities were calculated as averages of the results obtained with the Geant4 and PENELOPE Monte Carlo (MC) codes, though for other sources, significant differences have been quoted between the values obtained with these two codes. The aim of this work is to perform the dosimetric characterization of the Flexisource Co-60 HDR source using PENELOPE. The MC simulation code PENELOPE (v. 2014) has been used. Following the recommendations of the AAPM/ESTRO report, the radial dose function, the anisotropy function, the air-kerma strength, the dose rate constant, and the absorbed dose rate in water have been calculated. The results we have obtained exceed those of Vijande et al. In particular, the absorbed dose rate constant is ∼0.85% larger. A similar difference is also found in the other dosimetric quantities. The effect of the electrons emitted in the decay of ⁶⁰Co, usually neglected in this kind of simulation, is significant up to distances of 0.25 cm from the source. The systematic and significant differences we have found between the PENELOPE results and the average values found by Vijande et al. point out that the dosimetric characterizations carried out with the various MC codes should be provided independently. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
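
    For context, the radial dose function named here follows from transverse-axis dose rates and the TG-43 line-source geometry function. The sketch below implements that relation; the source length and dose-rate values are assumed for illustration and are not the paper's PENELOPE output.

        import numpy as np

        def G_L(r, L):
            """TG-43 line-source geometry function on the transverse axis."""
            return 2.0 * np.arctan(L / (2.0 * r)) / (L * r)

        def g_L(r, dose_rate, dose_rate_r0, r0=1.0, L=0.35):
            """Radial dose function: g_L(r) = [D(r)/D(r0)] * [G_L(r0)/G_L(r)]."""
            return (dose_rate / dose_rate_r0) * (G_L(r0, L) / G_L(r, L))

        r = np.array([0.5, 1.0, 2.0, 5.0])      # cm
        D = np.array([4.3, 1.1, 0.27, 0.040])   # toy transverse-axis dose rates
        print(np.round(g_L(r, D, dose_rate_r0=1.1), 3))   # equals 1 at r0 = 1 cm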

  8. Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method

    PubMed Central

    2016-01-01

    A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because the computational cost increases only linearly with the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
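
    The efficiency of the two-step scheme rests on the first, essentially free, Metropolis test against the intrinsic pKa: a costly neMD switching trajectory is generated only for flips that survive it. A minimal Python sketch of that first step, assuming detailed balance against the Henderson-Hasselbalch populations (an illustration, not the authors' implementation):

      import random

      def attempt_titration_step1(is_protonated, pH, pKa_intrinsic, rng=random.random):
          # Detailed balance against Henderson-Hasselbalch populations:
          # P(deprotonated) / P(protonated) = 10**(pH - pKa).
          if is_protonated:
              p_acc = min(1.0, 10.0 ** (pH - pKa_intrinsic))
          else:
              p_acc = min(1.0, 10.0 ** (pKa_intrinsic - pH))
          return rng() < p_acc  # True: proceed to the neMD switch (step 2)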

  9. Accelerated prompt gamma estimation for clinical proton therapy simulations.

    PubMed

    Huisman, Brent F B; Létang, J M; Testa, É; Sarrut, D

    2016-11-07

    There is interest in the particle therapy community in using prompt gammas (PGs), a natural byproduct of particle treatment, for range verification and eventually dose control. However, PG production is a rare process, and therefore the estimation of PGs exiting a patient during a proton treatment plan executed by a Monte Carlo (MC) simulation converges slowly. Recently, different approaches to accelerating the estimation of PG yield have been presented. Sterpin et al (2015 Phys. Med. Biol. 60 4915-46) described a fast analytic method, which is still sensitive to heterogeneities. El Kanawati et al (2015 Phys. Med. Biol. 60 8067-86) described a variance reduction method (pgTLE) that accelerates the PG estimation by precomputing PG production probabilities as a function of energy and target material, but with the drawback that it is limited to analytical phantoms. We present a two-stage variance reduction method, named voxelized pgTLE (vpgTLE), that extends pgTLE to voxelized volumes. As a preliminary step, PG production probabilities are precomputed once and stored in a database. In stage 1, we simulate the interactions between the treatment plan and the patient CT with low-statistics MC to obtain the spatial and spectral distribution of the PGs. As primary particles are propagated throughout the patient CT, the PG yields are computed in each voxel from the initial database, as a function of the current energy of the primary, the material in the voxel, and the step length. The result is a voxelized image of PG yield, normalized to a single primary. The second stage uses this intermediate PG image as a source to generate and propagate the number of PGs throughout the rest of the scene geometry, e.g. into a detection device, corresponding to the number of primaries desired. We achieved a gain of around 10³ for both a geometrically heterogeneous phantom and a complete patient CT treatment plan with respect to analog MC, at a convergence level of 2% relative uncertainty in the 90% yield region. The method agrees with reference analog MC simulations to within 10⁻⁴, with negligible bias. Gains per voxel range from 10² to 10⁴. The presented generic PG yield estimator is drop-in usable with any geometry and beam configuration. We showed a gain of three orders of magnitude compared to analog MC. With a large number of voxels and materials, memory consumption may be a concern and we discuss the consequences and possible tradeoffs. The method is available as part of Gate 7.2.
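
    Stage 1 amounts to a track-length estimator over the voxel grid: each proton step adds the product of its length and a tabulated PG production probability to the voxel it crosses. A minimal Python sketch under an assumed database layout, gamma_db[material][energy_bin] (illustrative, not the Gate implementation):

      def accumulate_pg_yield(steps, gamma_db, pg_image):
          # steps: (voxel_index, material_id, energy_bin, step_length) tuples
          # from a low-statistics proton MC; pg_image: flat array of expected
          # PG yield per voxel per primary.
          for voxel, material, e_bin, step_length in steps:
              pg_image[voxel] += step_length * gamma_db[material][e_bin]
          return pg_image

    Stage 2 would then treat the accumulated image as a PG source, sampling emission positions and energies from it and propagating the photons into the detector geometry.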

  10. Accelerated prompt gamma estimation for clinical proton therapy simulations

    NASA Astrophysics Data System (ADS)

    Huisman, Brent F. B.; Létang, J. M.; Testa, É.; Sarrut, D.

    2016-11-01

    There is interest in the particle therapy community in using prompt gammas (PGs), a natural byproduct of particle treatment, for range verification and eventually dose control. However, PG production is a rare process, and therefore the estimation of PGs exiting a patient during a proton treatment plan executed by a Monte Carlo (MC) simulation converges slowly. Recently, different approaches to accelerating the estimation of PG yield have been presented. Sterpin et al (2015 Phys. Med. Biol. 60 4915-46) described a fast analytic method, which is still sensitive to heterogeneities. El Kanawati et al (2015 Phys. Med. Biol. 60 8067-86) described a variance reduction method (pgTLE) that accelerates the PG estimation by precomputing PG production probabilities as a function of energy and target material, but with the drawback that it is limited to analytical phantoms. We present a two-stage variance reduction method, named voxelized pgTLE (vpgTLE), that extends pgTLE to voxelized volumes. As a preliminary step, PG production probabilities are precomputed once and stored in a database. In stage 1, we simulate the interactions between the treatment plan and the patient CT with low-statistics MC to obtain the spatial and spectral distribution of the PGs. As primary particles are propagated throughout the patient CT, the PG yields are computed in each voxel from the initial database, as a function of the current energy of the primary, the material in the voxel, and the step length. The result is a voxelized image of PG yield, normalized to a single primary. The second stage uses this intermediate PG image as a source to generate and propagate the number of PGs throughout the rest of the scene geometry, e.g. into a detection device, corresponding to the number of primaries desired. We achieved a gain of around 10³ for both a geometrically heterogeneous phantom and a complete patient CT treatment plan with respect to analog MC, at a convergence level of 2% relative uncertainty in the 90% yield region. The method agrees with reference analog MC simulations to within 10⁻⁴, with negligible bias. Gains per voxel range from 10² to 10⁴. The presented generic PG yield estimator is drop-in usable with any geometry and beam configuration. We showed a gain of three orders of magnitude compared to analog MC. With a large number of voxels and materials, memory consumption may be a concern and we discuss the consequences and possible tradeoffs. The method is available as part of Gate 7.2.

  11. One-dimensional simulation of stratification and dissolved oxygen in McCook Reservoir, Illinois

    USGS Publications Warehouse

    Robertson, Dale M.

    2000-01-01

    As part of the Chicagoland Underflow Plan/Tunnel and Reservoir Plan, the U.S. Army Corps of Engineers, Chicago District, plans to build McCook Reservoir, a flood-control reservoir to store combined stormwater and raw sewage (combined sewage). To prevent the combined sewage in the reservoir from becoming anoxic and producing hydrogen sulfide gas, a coarse-bubble aeration system will be designed and installed on the basis of results from CUP 0-D, a zero-dimensional model, and MAC3D, a three-dimensional model. Two inherent assumptions in the application of MAC3D are that density stratification in the simulated water body is minimal or not present and that surface heat transfers are unimportant and, therefore, may be neglected. To test these assumptions, the previously tested, one-dimensional Dynamic Lake Model (DLM) was used to simulate changes in temperature and dissolved oxygen in the reservoir after a 1-in-100-year event. Results from the model simulations indicate that the assumptions made in the MAC3D application are valid as long as the aeration system, with an air-flow rate of 1.2 cubic meters per second or more, is operated while the combined sewage is stored in the reservoir. Results also indicate that the high biochemical oxygen demand of the combined sewage will quickly consume the dissolved oxygen stored in the reservoir and the dissolved oxygen transferred through the surface of the reservoir; therefore, oxygen must be supplied either by the rising bubbles of the aeration system (a process not incorporated in DLM) or by some other technique to prevent anoxia.

  12. Fred: a GPU-accelerated fast-Monte Carlo code for rapid treatment plan recalculation in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Schiavi, A.; Senzacqua, M.; Pioli, S.; Mairani, A.; Magro, G.; Molinelli, S.; Ciocca, M.; Battistoni, G.; Patera, V.

    2017-09-01

    Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions onto a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details of the interaction of particles with human tissues, are considered the most reliable tool to address the complexity of mixed-field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources; MC simulations are therefore usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose programmable GPU cards prompted the development of trimmed-down MC-based dose engines that can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes on CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked, and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition against computational cost. Currently, the most refined module is the transport of proton beams in water: single pencil beam dose-depth distributions obtained with fred agree with those produced by standard MC codes to within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving 10 million primaries per second on a single card. This performance allows one to recalculate a proton treatment plan at 1% of the total particles in just a few minutes.
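
    As a flavor of what scoring energy deposition "along the trajectory" means in a voxelized engine, the sketch below deposits a step's energy at the step midpoint, one of the simplest scoring choices (a Python illustration; fred's actual tracing kernel runs on the GPU and is more refined):

      import numpy as np

      def score_step(dose, pos, direction, step_cm, edep_mev, voxel_cm):
          # Deposit the step's energy in the voxel containing the step midpoint.
          mid = np.asarray(pos) + 0.5 * step_cm * np.asarray(direction)
          idx = np.floor(mid / voxel_cm).astype(int)
          if np.all(idx >= 0) and np.all(idx < dose.shape):
              dose[tuple(idx)] += edep_mev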

  13. SU-F-T-50: Evaluation of Monte Carlo Simulations Performance for Pediatric Brachytherapy Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzipapas, C; Kagadis, G; Papadimitroulas, P

    Purpose: Pediatric tumors are generally treated with multi-modal procedures. Brachytherapy can be used for pediatric tumors, especially because in this patient population low toxicity to normal tissues is critical, as is suppressing the probability of late malignancies. Our goal is to validate the GATE toolkit in realistic brachytherapy applications and to evaluate brachytherapy plans on pediatric patients for accurate dosimetry of sensitive and critical organs of interest. Methods: The GATE Monte Carlo (MC) toolkit was used. Two high-dose-rate (HDR) 192Ir brachytherapy sources were simulated (Nucletron mHDR-v1 and Varian VS2000) and fully validated using the AAPM and ESTRO protocols. A realistic brachytherapy plan was also simulated using the XCAT anthropomorphic computational model, and the simulated data were compared to the clinical dose points. Finally, a 14-year-old girl with vaginal rhabdomyosarcoma was modelled based on clinical procedures for the calculation of the absorbed dose per organ. Results: The MC simulations resulted in accurate dosimetry in terms of the dose rate constant (Λ), radial dose function gL(r), and anisotropy function F(r,θ) for both sources. The simulations were executed with ∼10¹⁰ primaries, resulting in statistical uncertainties lower than 2%. The differences between the theoretical values and the simulated ones ranged from 0.01% up to 3.3%, with the largest discrepancy (6%) observed in the dose rate constant calculation. The simulated DVH using an adult female XCAT model was also compared to a clinical one, resulting in differences smaller than 5%. Finally, a realistic pediatric brachytherapy simulation was performed to evaluate the absorbed dose per organ and to calculate the DVH with respect to the heterogeneities of the human anatomy. Conclusion: GATE is a reliable tool for brachytherapy simulations, both for source modeling and for dosimetry in anthropomorphic voxelized models. Our project aims to evaluate a variety of pediatric brachytherapy schemes using a population of pediatric phantoms for several pathological cases. This study is part of a project that has received funding from the European Union Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No. 691203. The results published in this study reflect only the authors' view, and the Research Executive Agency (REA) and the European Commission are not responsible for any use that may be made of the information it contains.

  14. A modified implementation of tristate inverter based static master-slave flip-flop with improved power-delay-area product.

    PubMed

    Singh, Kunwar; Tiwari, Satish Chandra; Gupta, Maneesha

    2014-01-01

    The paper introduces novel architectures for the implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, a traditional C2MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C2MOS based flip-flop designs mC2MOSff1 and mC2MOSff2 are realized using only sixteen transistors each, while the number of clocked transistors is also reduced in the case of mC2MOSff1. Post-layout simulations indicate that the mC2MOSff1 flip-flop shows a 12.4% improvement in PDAP (power-delay-area product) when compared with the transmission gate flip-flop (TGFF) at a 16X capacitive load, which is considered the best design alternative among the conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight-bit asynchronous counter was designed down to the layout level. LVS and parasitic extraction were carried out in Calibre, whereas layouts were implemented using IC Station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to reveal the scalability of both designs at modern process nodes.

  15. A Modified Implementation of Tristate Inverter Based Static Master-Slave Flip-Flop with Improved Power-Delay-Area Product

    PubMed Central

    Tiwari, Satish Chandra; Gupta, Maneesha

    2014-01-01

    The paper introduces novel architectures for the implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, a traditional C2MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C2MOS based flip-flop designs mC2MOSff1 and mC2MOSff2 are realized using only sixteen transistors each, while the number of clocked transistors is also reduced in the case of mC2MOSff1. Post-layout simulations indicate that the mC2MOSff1 flip-flop shows a 12.4% improvement in PDAP (power-delay-area product) when compared with the transmission gate flip-flop (TGFF) at a 16X capacitive load, which is considered the best design alternative among the conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight-bit asynchronous counter was designed down to the layout level. LVS and parasitic extraction were carried out in Calibre, whereas layouts were implemented using IC Station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to reveal the scalability of both designs at modern process nodes. PMID:24723808

  16. Influence of the light propagation models on a linearized photoacoustic image reconstruction of the light absorption coefficient

    NASA Astrophysics Data System (ADS)

    Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya

    2015-03-01

    Quantification of the optical properties of tissues and blood by noninvasive photoacoustic (PA) imaging may provide useful information for screening and early diagnosis of diseases. A linearized 2D image reconstruction algorithm based on the PA wave equation and the photon diffusion equation (PDE) can reconstruct the image at a computational cost smaller than that of a method based on the 3D radiative transfer equation. However, the reconstructed image is affected by differences between the actual and assumed light propagation. In this study, the quantitative capability of a linearized 2D image reconstruction was investigated through numerical simulations and a phantom experiment. The numerical simulations were carried out with a 3D Monte Carlo (MC) simulation and a 2D finite element calculation of the PDE. In the phantom experiment, the PA pressures were acquired with a probe that had an optical fiber for illumination and a ring-shaped P(VDF-TrFE) ultrasound transducer; the measured object was made of Intralipid and indocyanine green. The numerical simulations showed that the linearized image reconstruction method recovered the absorption coefficients while alleviating the dependence of the PA amplitude on the depth of the photon absorber. The method worked effectively under the light propagation calculated by the 3D MC simulation, although some errors occurred. The phantom experiment validated the results of the numerical simulations.

  17. An interface for simulating radiative transfer in and around volcanic plumes with the Monte Carlo radiative transfer model McArtim

    USGS Publications Warehouse

    Kern, Christoph

    2016-03-23

    This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions) with respect to a change in SO2 column density or aerosol optical thickness can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.

  18. Office-Cycling: A Promising Way to Raise Pain Thresholds and Increase Metabolism with Minimal Compromising of Work Performance

    PubMed Central

    Nyberg, André; Hedlund, Mattias; Häger, Charlotte K.; McDonough, Suzanne; Björklund, Martin

    2018-01-01

    Aim: To establish the effects of low-intensity cycling (LC), moderate-intensity cycling (MC), and standing at a simulated office workstation on pain modulation, work performance, and metabolic expenditure. Methods: 36 healthy adults (21 females), mean age 26.8 (SD 7.6) years, took part in this randomized 3 × 3 crossover trial comprising 75 minutes of LC at 20% of maximum aerobic power (MAP) output, 30 minutes of MC at 50% of MAP, and 30 minutes of standing, with 48-hour wash-out periods. Outcome measures were pain modulation (pressure pain threshold (PPT) and thermal pain threshold), work performance (transcription, mouse pointing, and cognitive performance), and metabolic expenditure. Results: PPTs increased in all conditions. PPT trapezius showed the highest increase after LC, 39.3 kilopascals (kPa) (15.6; 78.6), compared to MC, 17.0 kPa (2.8; 49.9), and standing, 16.8 kPa (−5.6; 39.4), p = 0.015. Transcription was reduced during LC and MC. Mouse pointing precision was best during standing and worst and slowest during MC. Cognitive performance did not differ between conditions. Metabolic expenditure rates were 1.4 (1.3; 1.7), 3.3 (2.3; 3.7), and 7.5 (5.8; 8.7) kcal/minute during standing, LC, and MC, respectively (p < 0.001). Conclusions: LC seems to be the preferred option; it raised PPTs and more than doubled metabolic expenditure while minimally influencing work performance. PMID:29607323

  19. Can Male Circumcision Have an Impact on the HIV Epidemic in Men Who Have Sex with Men?

    PubMed Central

    Goodreau, Steven M.; Carnegie, Nicole B.; Vittinghoff, Eric; Lama, Javier R.; Fuchs, Jonathan D.; Sanchez, Jorge; Buchbinder, Susan P.

    2014-01-01

    Background: Three trials have demonstrated the prophylactic effect of male circumcision (MC) for HIV acquisition among heterosexuals, and MC interventions are underway throughout sub-Saharan Africa. Similar efforts for men who have sex with men (MSM) are stymied by the potential for circumcised MSM to acquire HIV easily through receptive sex and transmit easily through insertive sex. Existing work suggests that MC for MSM should reach its maximum potential in settings where sexual role segregation is historically high and relatively stable across the lifecourse; HIV incidence among MSM is high; reported willingness for prophylactic circumcision is high; and pre-existing circumcision rates are low. We aim to identify the likely public health impact that MC interventions among MSM would have in one setting that fulfills these conditions—Peru—as a theoretical upper bound for their effectiveness among MSM generally. Methods and Findings: We use a dynamic, stochastic sexual network model based in exponential-family random graph modeling and parameterized from multiple behavioral surveys of Peruvian MSM. We consider three enrollment criteria (insertive during 100%, >80% or >60% of UAI) and two levels of uptake (25% and 50% of eligible men); we explore sexual role proportions from two studies and different frequencies of switching among role categories. Each scenario is simulated 10 times. We estimate that efficiency could reach one case averted per 6 circumcisions. However, the population-level impact of an optimistic MSM-MC intervention in this setting would likely be at most ∼5–10% incidence and prevalence reductions over 25 years. Conclusions: Roll-out of MC for MSM in Peru would not result in a substantial reduction in new HIV infections, despite characteristics in this population that could maximize such effects. Additional studies are needed to confirm these results for other MSM populations, and providers may consider the individual health benefits of offering MC to their MSM patients. PMID:25076493

  20. Can male circumcision have an impact on the HIV epidemic in men who have sex with men?

    PubMed

    Goodreau, Steven M; Carnegie, Nicole B; Vittinghoff, Eric; Lama, Javier R; Fuchs, Jonathan D; Sanchez, Jorge; Buchbinder, Susan P

    2014-01-01

    Three trials have demonstrated the prophylactic effect of male circumcision (MC) for HIV acquisition among heterosexuals, and MC interventions are underway throughout sub-Saharan Africa. Similar efforts for men who have sex with men (MSM) are stymied by the potential for circumcised MSM to acquire HIV easily through receptive sex and transmit easily through insertive sex. Existing work suggests that MC for MSM should reach its maximum potential in settings where sexual role segregation is historically high and relatively stable across the lifecourse; HIV incidence among MSM is high; reported willingness for prophylactic circumcision is high; and pre-existing circumcision rates are low. We aim to identify the likely public health impact that MC interventions among MSM would have in one setting that fulfills these conditions-Peru-as a theoretical upper bound for their effectiveness among MSM generally. We use a dynamic, stochastic sexual network model based in exponential-family random graph modeling and parameterized from multiple behavioral surveys of Peruvian MSM. We consider three enrollment criteria (insertive during 100%, >80% or >60% of UAI) and two levels of uptake (25% and 50% of eligible men); we explore sexual role proportions from two studies and different frequencies of switching among role categories. Each scenario is simulated 10 times. We estimate that efficiency could reach one case averted per 6 circumcisions. However, the population-level impact of an optimistic MSM-MC intervention in this setting would likely be at most ∼5-10% incidence and prevalence reductions over 25 years. Roll-out of MC for MSM in Peru would not result in a substantial reduction in new HIV infections, despite characteristics in this population that could maximize such effects. Additional studies are needed to confirm these results for other MSM populations, and providers may consider the individual health benefits of offering MC to their MSM patients.

  1. Dosimetric impact of the low-dose envelope of scanned proton beams at a ProBeam facility: comparison of measurements with TPS and MC calculations.

    PubMed

    Würl, M; Englbrecht, F; Parodi, K; Hillbrand, M

    2016-01-21

    Due to the low-dose envelope of scanned proton beams, the dose output depends on the size of the irradiated field or volume. While this field size dependence has already been extensively investigated by measurements and Monte Carlo (MC) simulations for single pencil beams or monoenergetic fields, reports on the relevance of this effect for analytical dose calculation models are limited, and previous studies exist only for specific beamline designs. The amount of large-angle-scattered primary and long-range secondary particles, and thus the relevance of the low-dose envelope, can be considerably influenced by the particular design of the treatment nozzle. In this work, we therefore addressed the field size dependence of the dose output of the commercially available ProBeam® beamline, which is being built in several facilities worldwide. We compared treatment planning dose calculations with ionization chamber (IC) measurements and MC simulations, using an experimentally validated FLUKA MC model of the scanning beamline. To this aim, monoenergetic square fields of three energies, as well as spherical target volumes, were studied, including an investigation of the influence of the lateral spot spacing on the field size dependence. For the spherical target volumes, MC and analytical dose calculations were found in excellent agreement with the measurements in the center of the spread-out Bragg peak. In the plateau region, the treatment planning system (TPS) tended to overestimate the dose compared to MC calculations and IC measurements by up to almost 5% for the smallest investigated sphere and for small monoenergetic square fields. Narrower spot spacing slightly enhanced the field size dependence of the dose output. The deviations in the plateau dose go in the clinically safe direction, i.e., the actual deposited dose outside the target is lower than predicted by the TPS. Thus, the moderate overestimation of dose to normal tissue by the TPS is unlikely to have severe consequences in clinical cases, even for the most critical cases of small target volumes.

  2. Field Validity and Feasibility of Four Techniques for the Detection of Trichuris in Simians: A Model for Monitoring Drug Efficacy in Public Health?

    PubMed Central

    Levecke, Bruno; De Wilde, Nathalie; Vandenhoute, Els; Vercruysse, Jozef

    2009-01-01

    Background: Soil-transmitted helminths, such as Trichuris trichiura, are of major concern in public health. Current efforts to control these helminth infections involve periodic mass treatment in endemic areas. Since these large-scale interventions are likely to intensify, monitoring drug efficacy will become indispensable. However, studies comparing detection techniques based on sensitivity, fecal egg counts (FEC), feasibility for mass diagnosis, and drug efficacy estimates are scarce. Methodology/Principal Findings: In the present study, the ether-based concentration, Parasep Solvent Free (SF), McMaster, and FLOTAC techniques were compared based on both validity and feasibility for the detection of Trichuris eggs in 100 fecal samples of nonhuman primates. In addition, the drug efficacy estimates of the quantitative techniques were examined using a statistical simulation. Trichuris eggs were found in 47% of the samples. FLOTAC was the most sensitive technique (100%), followed by the Parasep SF (83.0% [95% confidence interval (CI): 82.4–83.6%]) and the ether-based concentration technique (76.6% [95% CI: 75.8–77.3%]). McMaster was the least sensitive (61.7% [95% CI: 60.7–62.6%]) and failed to detect low FEC. The quantitative comparison revealed a positive correlation between the four techniques (Rs = 0.85–0.93; p<0.0001). However, the ether-based concentration technique and the Parasep SF detected significantly fewer eggs than both the McMaster and the FLOTAC (p<0.0083). Overall, the McMaster was the most feasible technique (3.9 min/sample for preparing, reading, and cleaning of the apparatus), followed by the ether-based concentration technique (7.7 min/sample) and the FLOTAC (9.8 min/sample). Parasep SF was the least feasible (17.7 min/sample). The simulation revealed that sensitivity is less important for monitoring drug efficacy and that both FLOTAC and McMaster are reliable estimators. Conclusions/Significance: The results of this study demonstrate that McMaster is a promising technique when making use of FEC to monitor drug efficacy against Trichuris. PMID:19172171

  3. Vibrationally high-resolved electronic spectra of MCl2 (M=C, Si, Ge, Sn, Pb) and photoelectron spectra of MCl2(.).

    PubMed

    Ran, Yibin; Pang, Min; Shen, Wei; Li, Ming; He, Rongxing

    2016-10-05

    We systematically studied the vibrationally resolved electronic spectra of group IV dichlorides using the Franck-Condon approximation combined with the Duschinsky and Herzberg-Teller effects in harmonic and anharmonic frameworks (only the simulation of the absorption spectra includes anharmonicity). The calculated results showed that the band shapes of the simulated spectra are in accordance with the corresponding experimental or theoretical ones. We found that the symmetric bending mode is the most active one in the absorption progressions, whereas the main contributor in the photoelectron spectra is the symmetric stretching mode. Moreover, the Duschinsky and anharmonic effects exert a weak influence on the absorption spectra, except for the PbCl2 molecule. The theoretical insights presented in this work are significant for understanding the photophysical properties of MCl2 (M=C, Si, Ge, Sn, Pb) and for studying the Herzberg-Teller and anharmonic effects on the absorption spectra of new dichlorides of this main group. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Vibration study of a vehicle suspension assembly with the finite element method

    NASA Astrophysics Data System (ADS)

    Cătălin Marinescu, Gabriel; Castravete, Ştefan-Cristian; Dumitru, Nicolae

    2017-10-01

    The present work outlines a methodology for analysing various vibration effects on the mechanical parts of a vehicle suspension. A McPherson-type suspension from an existing vehicle was created using CAD software. Using the CAD model as input, a finite element model of the suspension assembly was developed. Abaqus finite element analysis software was used to pre-process, solve, and post-process the results. Geometric nonlinearities are included in the model, as are severe sources of nonlinearity such as friction and contact. The McPherson spring is modelled as a linear spring. The analysis includes several steps: preload, modal analysis, reduction of the model to 200 generalized coordinates, a deterministic external excitation, and a random excitation representing different types of roads. The vibration data used as input for the simulation were previously obtained by experimental means. The mathematical expressions used for the simulation are also presented in the paper.

  5. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process continues until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSET. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥5 ml or <5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy was within ±10% for volumes in the range 20-110 ml. A preliminary application to patients demonstrated the suitability of the method in a clinical setting. The MC-guided delineation of tumor volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom, and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumors of irregular shape and/or nonuniform distribution of the background activity are still in progress.
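
    The abstract specifies the iteration loop fully enough to summarize it as a short skeleton in which every callable stands for a system-specific calibration (a sketch of the described procedure, not the authors' MATLAB code; whether the RC enters as a division is an assumption, since the abstract says only that the SBR is "corrected by means of the RC curve"):

      def rithm_segment(image, initial_volume, measure_sbr, rc_curve,
                        threshold_vs_volume, volume_from_threshold,
                        tol=0.01, max_iter=50):
          volume, thr = initial_volume, None
          for _ in range(max_iter):
              sbr = measure_sbr(image, volume)             # (i) measure the SBR...
              sbr_corr = sbr / rc_curve(volume)            # ...and amend it with the RC curve
              thr = threshold_vs_volume(sbr_corr, volume)  # (ii) calibrated threshold
              new_volume = volume_from_threshold(image, thr)  # (iii) re-threshold
              if abs(new_volume - volume) <= tol * volume:    # converged
                  return new_volume, thr
              volume = new_volume
          return volume, thr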

  6. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance.

    ERIC Educational Resources Information Center

    Cheung, Gordon W.; Rensvold, Roger B.

    2002-01-01

    Examined 20 goodness-of-fit indexes based on the minimum fit function using a simulation under the 2-group situation. Results support the use of the delta comparative fit index, delta Gamma hat, and delta McDonald's Noncentrality Index to evaluate measurement invariance. These three approaches are independent of model complexity and sample size.…

  7. Fast simulation of yttrium-90 bremsstrahlung photons with GATE.

    PubMed

    Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan

    2010-06-01

    Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
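
    The heart of such a fast generator is drawing photon energies directly from the prerecorded PDFs instead of transporting electrons. A minimal inverse-transform sampler in Python (illustrative of the idea; the paper's GATE module and table format are not reproduced here):

      import numpy as np

      def sample_bremsstrahlung_energies(energies_kev, pdf, n, rng=None):
          # Inverse-transform sampling from a tabulated energy spectrum.
          rng = np.random.default_rng() if rng is None else rng
          cdf = np.cumsum(pdf, dtype=float)
          cdf /= cdf[-1]                    # normalize to a proper CDF
          return energies_kev[np.searchsorted(cdf, rng.random(n))]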

  8. Performance of McRAS-AC in the GEOS-5 AGCM: Part 1, Aerosol-Activated Cloud Microphysics, Precipitation, Radiative Effects, and Circulation

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Lee, D.; Oreopoulos, L.; Barahona, D.; Nenes, A.; Suarez, M. J.

    2012-01-01

    A revised version of the Microphysics of clouds with Relaxed Arakawa-Schubert and Aerosol-Cloud interaction (McRAS-AC) scheme, including, among others, the Barahona and Nenes ice nucleation parameterization, is implemented in the GEOS-5 AGCM. Various fields from a 10-year integration of the AGCM with McRAS-AC were compared with their counterparts from an integration of the baseline GEOS-5 AGCM, and with satellite data as observations. Using McRAS-AC generally reduced biases in cloud fields, and cloud radiative effects were much better reproduced over most regions of the Earth. Two weaknesses are identified in the McRAS-AC runs: too few cloud particles around 40°S-60°S, and too high a cloud water path during northern-hemisphere summer over the Gulf Stream and North Pacific. Sensitivity analyses showed that these biases potentially originate from biases in the aerosol input. The first bias is largely eliminated in a sensitivity test using 50% smaller aerosol particles, while the second is much reduced when interactive aerosol chemistry is turned on. The main drawback of McRAS-AC is a dearth of low-level marine stratus clouds, probably due to the lack of dry convection, not yet implemented in the cloud scheme. Despite these biases, McRAS-AC simulates realistic clouds and optical properties that should improve with better aerosol input; because it predicts cloud particle number concentration and effective particle size for both convective and stratiform clouds, it has the potential to be a valuable tool for climate modeling research involving aerosol indirect effects.

  9. Tracheal intubation in patients with cervical spine immobilization: A comparison of McGrath® video laryngoscope and Truview EVO2® laryngoscope.

    PubMed

    Bhola, Ruchi; Bhalla, Swaran; Gupta, Radha; Singh, Ishwar; Kumar, Sunil

    2014-05-01

    Literature suggests that the glottic view is better with the McGrath® video laryngoscope and the Truview® than with the Macintosh blade. The purpose of this study was to evaluate the effectiveness of the McGrath video laryngoscope in comparison with the Truview laryngoscope for tracheal intubation in patients with simulated cervical spine injury using manual in-line stabilisation. This prospective randomised study was undertaken in the operation theatre of a tertiary referral centre after approval from the Institutional Review Board. A total of 100 consenting patients presenting for elective surgery requiring tracheal intubation were randomly assigned to undergo intubation using the McGrath® video laryngoscope (n = 50) or the Truview® laryngoscope (n = 50). In all patients, we applied manual in-line stabilisation of the cervical spine throughout the airway management. Statistical testing was conducted with the Statistical Package for the Social Sciences (SPSS), version 17.0. Demographic data, airway assessment, and haemodynamics were compared using the Chi-square test; P < 0.05 was considered significant. The time to successful intubation was shorter with the McGrath video laryngoscope than with the Truview (30.02 s vs. 38.72 s). However, there was no significant difference between the laryngoscopic views obtained in the two groups. The number of second intubation attempts required and the incidence of complications were negligible with both devices, and the success rate of intubation was 100% with both. Intubation with the McGrath video laryngoscope caused smaller alterations in haemodynamics. Both laryngoscopes are reliable in simulated cervical spine injury with manual in-line stabilisation, with a 100% success rate and a good glottic view.

  10. MMU development at the Martin Marietta plant in Denver, Colorado

    NASA Image and Video Library

    1980-07-25

    S80-36889 (24 July 1980) --- Astronaut Bruce McCandless II uses a simulator at Martin Marietta's space center near Denver to develop flight techniques for a backpack propulsion unit that will be used on Space Shuttle flights. The manned maneuvering unit (MMU) training simulator allows astronauts to "fly missions" against a full scale mockup of a portion of the orbiter vehicle. Controls of the simulator are like those of the actual MMU. Manipulating them allows the astronaut to move in three straight-line directions and in pitch, yaw and roll. One possible application of the MMU is for an extravehicular activity chore to repair damaged tiles on the vehicle. McCandless is wearing an extravehicular mobility unit (EMU).

  11. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Wan Chan Tseung, Hok Seum; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC-generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitations imposed by the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was thus developed; treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
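
    The optimization step described here, adjusting nonnegative spot weights against an MC-generated dose influence matrix, can be illustrated with a tiny projected-gradient least-squares solver (a sketch only; the authors' modified least-squares method and GPU implementation are not detailed in the abstract):

      import numpy as np

      def optimize_spot_weights(D, d_target, n_iter=500):
          # D: (voxels x spots) MC dose influence matrix; w >= 0 spot weights.
          w = np.ones(D.shape[1])
          step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # Lipschitz-safe step size
          for _ in range(n_iter):
              grad = D.T @ (D @ w - d_target)          # gradient of 0.5*||Dw - d||^2
              w = np.maximum(0.0, w - step * grad)     # project onto w >= 0
          return w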

  12. Multileaf collimator tongue-and-groove effect on depth and off-axis doses: A comparison of treatment planning data with measurements and Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hee Jung (Department of Biomedical Engineering, Seoul National University, Seoul; Department of Radiation Oncology, Soonchunhyang University Hospital, Seoul)

    2015-01-01

    To investigate how accurately treatment planning systems (TPSs) account for the tongue-and-groove (TG) effect, Monte Carlo (MC) simulations and radiochromic film (RCF) measurements were performed for comparison with TPS results. Two commercial TPSs computed the TG effect for the Varian Millennium 120 multileaf collimator (MLC). The TG effect on the off-axis dose profile at three depths in solid water was estimated as the maximum depth and the full width at half maximum (FWHM) of the dose dip at an interleaf position. When compared with the off-axis dose of the open field, the maximum depth of the dose dip for MC and RCF ranged from 10.1% to 20.6%; it gradually decreased by up to 8.7% with increasing depth from 1.5 to 10 cm and by up to 4.1% with increasing off-axis distance from 0 to 13 cm. However, TPS results showed at most a 2.7% decrease over the same depth range and negligible variation over the same off-axis distances. The FWHM of the dose dip was approximately 0.19 cm for MC and 0.17 cm for RCF, but 0.30 cm for the Eclipse TPS and 0.45 cm for the Pinnacle TPS. Accordingly, the integrated value of the TG dose dip for the TPSs was larger than that for MC and RCF and almost invariant with depth and off-axis distance. We concluded that the TG dependence on depth and off-axis dose shown in the MC and RCF results could not be appropriately modeled by the TPS versions in this study.

  13. SU-F-J-14: Kilovoltage Cone-Beam CT Dose Estimation of Varian On-Board Imager Using GMctdospp Monte Carlo Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Rangaraj, D

    2016-06-15

    Purpose: Although cone-beam CT (CBCT) imaging has become popular in radiation oncology, estimating its imaging dose is still challenging. The goal of this study is to assess kilovoltage CBCT doses using GMctdospp, an EGSnrc-based Monte Carlo (MC) framework. Methods: Two Varian OBI x-ray tube models were implemented in the GMctdospp framework of the EGSnrc MC system. The x-ray spectrum of the 125 kVp CBCT beam was acquired from an EGSnrc/BEAMnrc simulation and validated against IPEM Report 78. The spectrum was then used as the input spectrum in the GMctdospp dose calculations. Both the full and half bowtie pre-filters of the OBI system were created using the egs-prism module. The x-ray tube MC models were verified by comparing calculated dosimetric profiles (lateral and depth) with ion chamber measurements for a static x-ray beam irradiating a cuboid water phantom. Abdominal CBCT imaging doses were then simulated in the GMctdospp framework using a 5-year-old anthropomorphic phantom. The organ doses and effective dose (ED) from the framework were assessed and compared with MOSFET measurements and convolution/superposition (CS) dose calculations. Results: The lateral and depth dose profiles in the cuboid water phantom matched within 6%, except in a few areas: the left shoulder of the half-bowtie lateral profile and the surface of the water phantom. The organ doses and ED from the MC framework agreed with the MOSFET measurements and CS calculations to within 2 cGy and 5 mSv, respectively. Conclusion: This study implemented and validated Varian OBI x-ray tube models in the GMctdospp MC framework using a cuboid water phantom, and CBCT imaging doses were evaluated in a 5-year-old anthropomorphic phantom. In future work, various CBCT imaging protocols will be implemented and validated, and patient CT images will then be used to estimate CBCT imaging doses in patients.

  14. SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Wu, G; Gao, Y

    2016-06-15

    Purpose: To explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primaries and mediastinal lymph nodes) of varying size, shape, location, and ¹⁸F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape's "complexity score". This complexity score was calculated for standard geometric shapes and for MC-generated target volumes in the PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference between the MC volumes and the gradient-based volumes was 11% (0.4%-48.4%). The complexity score showed strong correlation with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on the MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinum-adjacent lesions, the average absolute percent differences in volume were 6% and 29%, respectively; the former group is more isolated, while the latter is more surrounded by tissue with a relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
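
    One plausible reading of the described metric, the standard deviation of a discrete curvature estimate (turning angle per unit arc length) along a closed contour, is sketched below in Python; the authors' exact definition is not given in the abstract:

      import numpy as np

      def complexity_score(contour_xy):
          pts = np.asarray(contour_xy, dtype=float)
          v1 = pts - np.roll(pts, 1, axis=0)            # incoming edge vectors
          v2 = np.roll(pts, -1, axis=0) - pts           # outgoing edge vectors
          turn = np.angle(np.exp(1j * (np.arctan2(v2[:, 1], v2[:, 0])
                                       - np.arctan2(v1[:, 1], v1[:, 0]))))
          ds = 0.5 * (np.linalg.norm(v1, axis=1) + np.linalg.norm(v2, axis=1))
          curvature = turn / np.maximum(ds, 1e-12)      # signed discrete curvature
          return float(np.std(curvature))               # the "complexity score"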

  15. Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process

    NASA Astrophysics Data System (ADS)

    Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.

    2017-05-01

    Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10¹² G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines, making sophisticated geometries investigable by simulating them for the first time. Methods: The simulation utilizes the mean-free-path tables described in the first paper of this series for fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to simpler, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online (see link in footnote, page 1).
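
    The Green's-function output is what makes the expensive MC reusable: the spectrum for an arbitrary continuum is a weighted superposition of the precomputed monoenergetic responses, F(E) = ∫ G(E; E') C(E') dE'. In discretized form (a Python sketch under an assumed array layout; the published tables may be organized differently):

      import numpy as np

      def synthesize_spectrum(greens, continuum_weights, dE):
          # greens[j, i]: response in output bin i to a monoenergetic
          # injection in input bin j; the fold is a weighted sum of rows.
          return dE * (np.asarray(continuum_weights) @ np.asarray(greens))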

  16. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.

    2014-11-01

    Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and should therefore also be validated using a variety of phantoms with different shapes and material compositions that produce a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations needed to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y-, and z-modulation. Each phantom was scanned on a multidetector-row CT (Sensation 64) scanner under both FTC and TCM conditions. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively; for FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provides good agreement between measured and simulated values for both simple and complex geometries, including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
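
    The agreement metric quoted throughout is the percent RMS of the measurement-simulation differences; a small helper shows one natural definition (the authors' exact normalization is an assumption here):

      import numpy as np

      def percent_rms_difference(measured, simulated):
          measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
          rel = (simulated - measured) / measured   # point-by-point relative difference
          return 100.0 * np.sqrt(np.mean(rel ** 2))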

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadoura, Ahmad, E-mail: ahmad.kadoura@kaust.edu.sa; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Siripatana, Adil, E-mail: adil.siripatana@kaust.edu.sa

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous reduction in computational time, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data for several pure components, comprising a set of supercritical isotherms and part of the two-phase envelope, were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fits were obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH{sub 4}, N{sub 2}, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO{sub 2} and C{sub 2}H{sub 6}.
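
    A non-intrusive PC surrogate of this kind can be built by regressing MC outputs onto an orthogonal polynomial basis. A minimal sketch under stated assumptions: a tensor-product Legendre basis and synthetic training data standing in for the MC results (everything below is illustrative, not the paper's implementation):

        import numpy as np
        from numpy.polynomial import legendre

        # Hypothetical training data: an MC-computed quantity sampled at (epsilon, sigma)
        # points, with both inputs rescaled to [-1, 1] as Legendre PC expansions assume.
        rng = np.random.default_rng(0)
        x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
        y = 1.0 + 0.5 * x1 - 0.3 * x2 + 0.2 * x1 * x2 + 0.01 * rng.normal(size=200)

        def basis(x1, x2, deg=2):
            # Tensor-product Legendre basis up to degree `deg` in each input.
            cols = []
            for i in range(deg + 1):
                for j in range(deg + 1):
                    ci = np.zeros(i + 1); ci[i] = 1.0
                    cj = np.zeros(j + 1); cj[j] = 1.0
                    cols.append(legendre.legval(x1, ci) * legendre.legval(x2, cj))
            return np.column_stack(cols)

        A = basis(x1, x2)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # non-intrusive PC by regression

        # The surrogate now evaluates in microseconds instead of re-running MC:
        print(basis(np.array([0.1]), np.array([-0.2])) @ coef)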

  18. Microcystin distribution in physical size class separations of natural plankton communities

    USGS Publications Warehouse

    Graham, J.L.; Jones, J.R.

    2007-01-01

    Phytoplankton communities in 30 northern Missouri and Iowa lakes were physically separated into 5 size classes (>100 µm, 53-100 µm, 35-53 µm, 10-35 µm, 1-10 µm) during 15-21 August 2004 to determine the distribution of microcystin (MC) in size-fractionated lake samples and assess how net collections influence estimates of MC concentration. MC was detected in whole water (total) from 83% of lakes sampled, and total MC values ranged from 0.1-7.0 µg/L (mean = 0.8 µg/L). On average, MC in the >100 µm size class comprised ~40% of total MC, while other individual size classes contributed 9-20% to total MC. MC values decreased with size class and were significantly greater in the >100 µm size class (mean = 0.5 µg/L) than the 35-53 µm (mean = 0.1 µg/L), 10-35 µm (mean = 0.0 µg/L), and 1-10 µm (mean = 0.0 µg/L) size classes (p < 0.01). MC values in nets with 100-µm, 53-µm, 35-µm, and 10-µm mesh were cumulatively summed to simulate the potential bias of measuring MC with various size plankton nets. On average, a 100-µm net underestimated total MC by 51%, compared to 37% for a 53-µm net, 28% for a 35-µm net, and 17% for a 10-µm net. While plankton nets consistently underestimated total MC, concentration of algae with net sieves allowed detection of MC at low levels (~0.01 µg/L); 93% of lakes had detectable levels of MC in concentrated samples. Thus, small-mesh plankton nets are an option for documenting MC occurrence, but whole water samples should be collected to characterize total MC concentrations. © Copyright by the North American Lake Management Society 2007.
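
    The net-bias simulation described above is just a cumulative sum over size classes. A worked toy version, with made-up concentrations in place of the study's data (so it does not reproduce the 51/37/28/17% figures):

        # Hypothetical per-size-class MC concentrations (ug/L) summing to a
        # whole-water total of 0.80 ug/L; values are illustrative only.
        size_class_mc = {">100": 0.50, "53-100": 0.10, "35-53": 0.08,
                         "10-35": 0.07, "1-10": 0.05}
        total = 0.80  # whole-water (total) MC

        captured = 0.0
        for mesh, conc in size_class_mc.items():
            captured += conc  # a net of this mesh retains this class and all larger ones
            under = 100.0 * (1.0 - captured / total)
            print(f"net retaining down to {mesh} um underestimates total MC by {under:.0f}%")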

  19. Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test

    NASA Astrophysics Data System (ADS)

    Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.

    We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high-energy transients.
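
    The null-hypothesis noise model described above (red noise seen through photon counting) can be sketched in a few lines. All parameters below are illustrative choices, not the paper's values:

        import numpy as np

        rng = np.random.default_rng(1)
        n, phi, mu = 4096, 0.8, 50.0  # series length, AR(1) coefficient, mean count rate

        # AR(1) "red noise" modulating the source intensity.
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        intensity = mu * np.exp(0.1 * x / x.std())  # keep the rate positive

        # Photon counting corrupts the series with intrinsic Poisson noise.
        counts = rng.poisson(intensity)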

  20. Simulating the characteristics of tropical cyclones over the South West Indian Ocean using a Stretched-Grid Global Climate Model

    NASA Astrophysics Data System (ADS)

    Maoyi, Molulaqhooa L.; Abiodun, Babatunde J.; Prusa, Joseph M.; Veitch, Jennifer J.

    2018-03-01

    Tropical cyclones (TCs) are among the most devastating natural phenomena. This study examines the capability of a global climate model with grid stretching (CAM-EULAG, hereafter CEU) in simulating the characteristics of TCs over the South West Indian Ocean (SWIO). In the study, CEU is applied with a variable-increment global grid that has a fine horizontal grid resolution (0.5° × 0.5°) over the SWIO and a coarser resolution (1° × 1° to 2° × 2.25°) over the rest of the globe. The simulation is performed for 11 years (1999-2010) and validated against the Joint Typhoon Warning Center (JTWC) best track data, Global Precipitation Climatology Project (GPCP) satellite data, and ERA-Interim (ERAINT) reanalysis. CEU gives a realistic simulation of the SWIO climate and shows some skill in simulating the spatial distribution of TC genesis locations and tracks over the basin. However, there are some discrepancies between the observed and simulated climatic features over the Mozambique Channel (MC). Over the MC, CEU simulates a substantial cyclonic feature that produces a higher number of TCs than observed. The dynamical structure and intensities of the CEU TCs compare well with observations, though the model struggles to produce TCs with a pressure centre as deep as observed. The reanalysis has the same problem. The model captures the monthly variation of TC occurrence well but struggles to reproduce the interannual variation. The results of this study have application in improving and adapting CEU for seasonal forecasting over the SWIO.

  1. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-94, with projections to 2020; (supplement one to U.S. Geological Survey Water-resources investigations report 94-4251)

    USGS Publications Warehouse

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.

  2. Determination of the effective diffusivity of water in a poly (methyl methacrylate) membrane containing carbon nanotubes using kinetic Monte Carlo simulations.

    PubMed

    Mermigkis, Panagiotis G; Tsalikis, Dimitrios G; Mavrantzas, Vlasis G

    2015-10-28

    A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice to the bonds of which rate constants are assigned governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuum-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, D(eff), of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the concentration C (vol. %) in nanotubes. For a constant value of the concentration C, D(eff) is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for D(eff) as a function of C and L/D that can describe the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on D(eff) (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate times and correlated them with the time needed for penetrant water molecules to explore the available large, fast-diffusing CNT pores before Fickian diffusion is reached.

  3. Determination of the effective diffusivity of water in a poly (methyl methacrylate) membrane containing carbon nanotubes using kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Mermigkis, Panagiotis G.; Tsalikis, Dimitrios G.; Mavrantzas, Vlasis G.

    2015-10-01

    A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice to the bonds of which rate constants are assigned governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuum-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, Deff, of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the concentration C (vol. %) in nanotubes. For a constant value of the concentration C, Deff is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for Deff as a function of C and L/D that can describe the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on Deff (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate times and correlated them with the time needed for penetrant water molecules to explore the available large, fast-diffusing CNT pores before Fickian diffusion is reached.
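
    The lattice kMC scheme described in the two records above is straightforward to prototype. A minimal single-walker sketch, simplified so that the jump rate is set by the walker's current domain rather than by per-bond rate constants as in the paper; geometry, rates, and step count are all illustrative:

        import numpy as np

        rng = np.random.default_rng(2)
        L = 50
        lattice = np.zeros((L, L, L), dtype=int)   # 0 = PMMA site, 1 = CNT site
        lattice[L // 2, L // 2, :] = 1             # one straight CNT channel along z
        k_pmma, k_cnt = 1.0, 1.0e3                 # CNT jumps ~3 orders of magnitude faster

        moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                          [0, -1, 0], [0, 0, 1], [0, 0, -1]])
        pos = np.zeros(3, dtype=int)               # unwrapped position, for the displacement
        t = 0.0
        for _ in range(100_000):
            site = tuple(pos % L)                  # wrap only when indexing the periodic lattice
            k = k_cnt if lattice[site] == 1 else k_pmma
            t += rng.exponential(1.0 / (6.0 * k))  # continuum-time kMC waiting time
            pos = pos + moves[rng.integers(6)]     # jump to a random neighbor
        msd = float(np.sum(pos.astype(float) ** 2))
        print("single-walker D_eff estimate:", msd / (6.0 * t))

    In practice one would average the mean-square displacement over many walkers and fit its long-time slope, rather than use a single trajectory as here.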

  4. SU-F-T-364: Monte Carlo-Dose Verification of Volumetric Modulated Arc Therapy Plans Using AAPM TG-119 Test Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onizuka, R; Araki, F; Ohno, T

    2016-06-15

    Purpose: To investigate Monte Carlo (MC)-based dose verification of VMAT plans from a treatment planning system (TPS). Methods: The AAPM TG-119 test structure set was used for VMAT plans in the Pinnacle3 TPS (convolution/superposition), using a Synergy radiation head with a 6 MV beam and the Agility MLC. The Synergy was simulated with the EGSnrc/BEAMnrc code, and VMAT dose distributions were calculated with the EGSnrc/DOSXYZnrc code under the same irradiation conditions as the TPS. VMAT dose distributions from the TPS and MC were compared with those of EBT3 film by 2-D gamma analysis with ±3%/3 mm criteria and a threshold of 30% of the prescribed doses. VMAT dose distributions from the TPS and MC were also compared by DVHs and 3-D gamma analysis with ±3%/3 mm criteria and a threshold of 10%, and 3-D passing rates for PTVs and OARs were analyzed. Results: TPS dose distributions differed from those of film, especially for Head & neck. The dose difference between TPS and film results from the calculation accuracy for complex MLC motion, such as the tongue-and-groove effect. In contrast, MC dose distributions were in good agreement with those of film. This is because MC can fully model the MLC configuration and accurately reproduce the MLC motion between control points in VMAT plans. D95 of the PTV for Prostate, Head & neck, C-shaped, and Multi Target was 97.2%, 98.1%, 101.6%, and 99.7% for TPS and 95.7%, 96.0%, 100.6%, and 99.1% for MC, respectively. Similarly, 3-D gamma passing rates of each PTV for TPS vs. MC were 100%, 89.5%, 99.7%, and 100%, respectively. 3-D passing rates of the TPS were reduced for complex VMAT fields like Head & neck because the MLCs are not modeled completely in the TPS. Conclusion: MC-calculated VMAT dose distributions are useful for the 3-D dose verification of VMAT plans from a TPS.
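
    A brute-force version of the global gamma analysis used above fits in a short function. A minimal 2-D sketch under stated assumptions (global normalization to the reference maximum, same grid for both distributions; the test data are random placeholders):

        import numpy as np

        def gamma_pass_rate(ref, ev, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0,
                            threshold=0.30):
            # Brute-force global 2-D gamma; ref and ev are dose grids on the same
            # lattice, spacing_mm is the pixel pitch. Illustrative, not optimized.
            ny, nx = ref.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            dmax = ref.max()
            passed = []
            for iy in range(ny):
                for ix in range(nx):
                    if ref[iy, ix] < threshold * dmax:
                        continue  # low-dose threshold (30% of max, as in the abstract)
                    dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
                    dose2 = (ev - ref[iy, ix]) ** 2 / (dose_tol * dmax) ** 2
                    passed.append(np.min(dist2 / dist_tol_mm ** 2 + dose2) <= 1.0)
            return 100.0 * np.mean(passed)

        rng = np.random.default_rng(4)
        ref = rng.random((40, 40)) * 2.0                          # hypothetical dose plane (Gy)
        ev = ref * (1.0 + 0.01 * rng.standard_normal(ref.shape))  # 1% perturbed "evaluated" dose
        print(f"gamma pass rate: {gamma_pass_rate(ref, ev, spacing_mm=2.5):.1f}%")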

  5. Kinetic Monte Carlo simulation of self-organized pattern formation induced by ion beam sputtering using crater functions

    NASA Astrophysics Data System (ADS)

    Yang, Zhangcan; Lively, Michael A.; Allain, Jean Paul

    2015-02-01

    The production of self-organized nanostructures by ion beam sputtering has been of keen interest to researchers for many decades. Despite numerous experimental and theoretical efforts to understand ion-induced nanostructures, there are still many basic questions open to discussion, such as the role of erosion or curvature-dependent sputtering. In this work, a hybrid MD/kMC (molecular dynamics/kinetic Monte Carlo) multiscale atomistic model is developed to investigate these knowledge gaps, and its predictive ability is validated across the experimental parameter space. This model uses crater functions, which were obtained from MD simulations, to model the prompt mass redistribution due to single-ion impacts. Defect migration, which is missing from previous models that use crater functions, is treated by a kMC Arrhenius method. Using this model, a systematic study was performed for silicon bombarded by Ar+ ions of various energies (100 eV, 250 eV, 500 eV, 700 eV, and 1000 eV) at incidence angles of 0° to 80°. The simulation results were compared with experimental findings, showing good agreement in many aspects of surface evolution, such as the phase diagram. The underestimation of the ripple wavelength by the simulations suggests that surface diffusion is not the main smoothening mechanism for ion-induced pattern formation. Furthermore, the simulated results were compared with moment-description continuum theory and found to give better results, as the simulation did not suffer from the same mathematical inconsistencies as the continuum model. The key finding was that redistributive effects are dominant in the formation of flat surfaces and parallel-mode ripples, but erosive effects are dominant at high angles when perpendicular-mode ripples are formed. Ion irradiation with simultaneous sample rotation was also simulated, resulting in arrays of square-ordered dots. The patterns obtained from sample rotation were strongly correlated to the rotation speed and to the pattern types formed without sample rotation, and a critical value of about 5 rpm was found between disordered ripples and square-ordered dots. Finally, simulations of dual-beam sputtering were performed, with the resulting patterns determined by the flux ratio of the two beams and the pattern types resulting from single-beam sputtering under the same conditions.

  6. The Atmospheric Chemistry of Methyl Chavicol (Estragole)

    NASA Astrophysics Data System (ADS)

    Bloss, W. J.; Alam, M. S.; Rickard, A. R.; Hamilton, J. F.; Pereira, K. F.; Camredon, M.; Munoz, A.; Vazquez, M.; Alacreu, P.; Rodenas, M.; Vera, T.

    2012-12-01

    The oxidation of volatile organic compounds (VOCs) leads to formation of ozone and secondary organic aerosols (SOA), with consequences for health, air quality, crop yields, atmospheric chemistry and radiative transfer. It is estimated that ca. 90% of VOC emissions to the atmosphere originate from biogenic sources (BVOC); such emissions may increase under future climates. Recent field observations have identified Methyl Chavicol ("MC" hereafter, also known as Estragole; 1-allyl-4-methoxybenzene, C10H12O) as a major BVOC above pine forests in the USA [Bouvier-Brown et al., 2009], and within an oil palm plantation in Malaysian Borneo, where it was found that MC could represent the highest single floral contribution of reactive carbon to the atmosphere [Misztal et al., 2010]. Palm oil cultivation, and hence emissions of MC, may be expected to increase with societal food and biofuel demand. We present the results of a series of simulation chamber experiments to assess the atmospheric fate of MC. Experiments were performed in the EUPHORE (European Photoreactor) facility in Valencia, Spain (200 m³ outdoor smog chamber), investigating the degradation of MC by reaction with OH, O3 and NO3. An extensive range of measurement instrumentation was used to monitor precursor and product formation, including stable species (FTIR, PTR-MS, GC-FID and GC-MS), radical intermediates (LIF), inorganic components (NOx, O3, HONO (LOPAP)), and aerosol production (SMPS) and composition (PILS and filters, analysed offline by LC-MS and FTICR-MS). Experiments were conducted at a range of NOx:VOC ratios, and in the presence and absence of radical (OH) scavenger compounds. This chamber dataset is used to determine the rate constants for reaction of MC with OH, O3 and NO3, the ozonolysis radical yields, and to identify the primary degradation products for each initiation route, alongside the aerosol mass yields. Aerosol composition measurements are analysed to identify markers for MC contributions to SOA formation in the ambient atmosphere. The results are compared with the (limited) previous smog chamber results, and discussed in the context of the recent field data on MC production and emissions. References: Bouvier-Brown et al., Atmos. Chem. Phys., 9, 2061, 2009; Misztal et al., Atmos. Chem. Phys., 10, 4343, 2010.

  7. The DoE method as an efficient tool for modeling the behavior of monocrystalline Si-PV module

    NASA Astrophysics Data System (ADS)

    Kessaissia, Fatma Zohra; Zegaoui, Abdallah; Boutoubat, Mohamed; Allouache, Hadj; Aillerie, Michel; Charles, Jean-Pierre

    2018-05-01

    The objective of this paper is to apply the Design of Experiments (DoE) method to study and obtain a predictive model of any marketed monocrystalline photovoltaic (mc-PV) module. This technique yields a mathematical model that represents the predicted responses as a function of the input factors and experimental data. Therefore, the DoE model for characterization and modeling of mc-PV module behavior can be obtained by performing just a set of experimental trials. The DoE model of the mc-PV panel predicts the maximum power as a function of irradiation and temperature in a bounded domain of study for the inputs. For the mc-PV panel, predictive models for both one-level and two-level designs were developed, taking into account the influences of both the main effects and the interaction effects of the considered factors. The DoE method is then implemented in code developed under Matlab. The code allows us to simulate, characterize, and validate the predictive model of the mc-PV panel. The calculated results were compared to the experimental data, errors were estimated, and the predictive models were validated against the response surface. Finally, we conclude that the predictive models reproduce the experimental trials with good accuracy.
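
    A two-level full factorial with a main-effects-plus-interaction model, the workhorse of DoE, can be fitted by hand. A minimal sketch (Python rather than the paper's Matlab; factors in coded units and hypothetical Pmax responses, not the paper's data):

        import numpy as np

        # Two-level full factorial in coded units: x1 = irradiance, x2 = temperature,
        # both scaled to [-1, +1]. Responses y are hypothetical Pmax readings (W).
        X = np.array([[-1, -1], [+1, -1], [-1, +1], [+1, +1]], dtype=float)
        y = np.array([38.0, 62.0, 35.0, 55.0])

        # Model with main effects and the interaction: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
        A = np.column_stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
        b = np.linalg.solve(A, y)  # 4 runs, 4 coefficients: exactly determined
        print("b0, b1, b2, b12 =", b)

        # Predict at an interior point of the studied domain (coded units):
        x1, x2 = 0.5, -0.2
        print("predicted Pmax:", b[0] + b[1] * x1 + b[2] * x2 + b[3] * x1 * x2)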

  8. The effect of the magnetic topology of the Magnetic Clouds over the Solar Energetic Particle Events

    NASA Astrophysics Data System (ADS)

    Medina, J.; Hidalgo, M.; Blanco, J.; Rodriguez-Pacheco, J.

    2007-12-01

    We have simulated the effect of the magnetic topology of Magnetic Clouds (MCs) on the solar energetic particle event (SEPe) fluxes (0.5-100 MeV) produced by solar flares. When a SEPe passes through a MC, a characteristic behaviour is observed in the data corresponding to the ion and electron fluxes: a depression after a strong maximum of the flux. Using our circular and elliptical cross-section MC models we have tried to explain that effect and understand the importance of the topology of the MC. In light of the results of the preliminary analysis we conclude that the magnitude of the magnetic field does not seem to play a significant role; rather, the helicoidal topology associated with the MCs does. This work has been supported by the Spanish Comisión Internacional de Ciencia y Tecnología (CICYT), grants ESP2005-07290-C02-01 and ESP2006-08459. This work is performed inside COST Action 724.

  9. Energetic Particles Events inside Magnetic Clouds

    NASA Astrophysics Data System (ADS)

    Medina, Jose; Hidalgo, Miguel Angel; Blanco, Juan Jose; Rodriguez-Pacheco, Javier

    The effect of the magnetic topology of Magnetic Clouds (MCs) on energetic particle event (EPe) fluxes (0.5-100 MeV) has been simulated. In the data corresponding to the ion and electron fluxes, a depression after a strong maximum is observed when an EPe passes through a MC. Using our circular and elliptical cross-section MC models (Journal of Geophysical Research 107(1), doi:10.1029/2001JA900100 (2002) and Solar Physics 207(1), 187-198 (2002)) we have tried to explain that effect and understand the importance of the topology of the MC. In light of the results of the preliminary analysis we conclude that the magnitude of the magnetic field does not seem to play a significant role; rather, the helicoidal topology associated with the MCs does. This work has been supported by the Spanish Comisión Internacional de Ciencia y Tecnología (CICYT), grants ESP2005-07290-C02-01 and ESP2006-08459. This work is performed inside COST Action 724.

  10. Statistical study of defects caused by primary knock-on atoms in fcc Cu and bcc W using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.

    2015-12-01

    We report on molecular dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The numbers of Frenkel pairs produced in the MD simulations are compared with those from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results of the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the need to read and write (I/O) several terabytes of atomic position data when exploring a large number of random directions, with no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.

  11. Kinetic Monte Carlo Simulation of Oxygen and Cation Diffusion in Yttria-Stabilized Zirconia

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2011-01-01

    Yttria-stabilized zirconia (YSZ) is of interest to the aerospace community, notably for its application as a thermal barrier coating for turbine engine components. In such an application, diffusion of both oxygen ions and cations is of concern. Oxygen diffusion can lead to deterioration of a coated part, and often necessitates an environmental barrier coating. Cation diffusion in YSZ is much slower than oxygen diffusion. However, such diffusion is a mechanism by which creep takes place, potentially affecting the mechanical integrity and phase stability of the coating. In other applications, the high oxygen diffusivity of YSZ is useful, and makes the material of interest for use as a solid-state electrolyte in fuel cells. The kinetic Monte Carlo (kMC) method offers a number of advantages compared with the more widely known molecular dynamics simulation method. In particular, kMC is much more efficient for the study of processes, such as diffusion, that involve infrequent events. We describe the results of kinetic Monte Carlo computer simulations of oxygen and cation diffusion in YSZ. Using diffusive energy barriers from ab initio calculations and from the literature, we present results on the temperature dependence of oxygen and cation diffusivity, and on the dependence of the diffusivities on yttria concentration and oxygen sublattice vacancy concentration. We also present results of the effect on diffusivity of oxygen vacancies in the vicinity of the barrier cations that determine the oxygen diffusion energy barriers.
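
    In a kMC model of this kind, each candidate hop gets an Arrhenius rate built from its energy barrier, one event is drawn with probability proportional to its rate, and the clock advances by an exponential waiting time. A minimal residence-time-algorithm sketch (the barriers, attempt frequency, and temperature below are generic placeholders, not the ab initio values used in the study):

        import numpy as np

        kB = 8.617e-5  # Boltzmann constant, eV/K

        def arrhenius_rate(barrier_eV, T_kelvin, nu0=1e13):
            # Hop rate for one candidate jump; nu0 is a generic attempt frequency (1/s).
            return nu0 * np.exp(-barrier_eV / (kB * T_kelvin))

        rng = np.random.default_rng(3)
        T = 1200.0  # K; placeholder operating temperature
        # Placeholder barriers (eV) for the jumps available from the current configuration:
        rates = np.array([arrhenius_rate(0.6, T),
                          arrhenius_rate(0.9, T),
                          arrhenius_rate(1.1, T)])

        total = rates.sum()
        event = rng.choice(len(rates), p=rates / total)  # pick one event, weighted by rate
        dt = rng.exponential(1.0 / total)                # advance the simulation clock
        print(f"chosen event {event}, time step {dt:.3e} s")

    Because the waiting time is drawn from the total rate, the method spends no effort on unsuccessful attempts, which is why kMC handles infrequent-event processes like cation diffusion so much more efficiently than MD.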

  12. Propagation and scattering of vector light beam in turbid scattering medium

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Milione, Giovanni; Meglinski, Igor; Alfano, Robert R.

    2014-03-01

    Due to their high sensitivity to subtle alterations in medium morphology, vector light beams have recently gained much attention in photonics. This has led to the development of new non-invasive optical techniques for tissue diagnostics. Conceptual design of particular experimental systems requires careful selection of various technical parameters, including beam structure, polarization, coherence, and wavelength of the incident optical radiation, as well as an estimation of how spatial and temporal structural alterations in biological tissues can be distinguished by variations of these parameters. Therefore, an accurate, realistic description of vector light beam propagation within tissue-like media is required. To simulate and mimic the propagation of vector light beams within turbid scattering media, the stochastic Monte Carlo (MC) technique has been used. In the current report we present the developed MC model and the results of simulating the propagation of different vector light beams in turbid tissue-like scattering media. The developed MC model takes into account the coherent properties of light, the influence of reflection and refraction at the medium boundary, helicity flips of vortices, and their mutual interference. Finally, in analogy with the concept of the higher-order Poincaré sphere (HOPS), we introduced the color-coded HOPS to link the spatial distribution of the intensity of the backscattered vector light beam on the medium surface with its state of polarization.

  13. Maier-Saupe model of polymer nematics: Comparing free energies calculated with Self Consistent Field theory and Monte Carlo simulations.

    PubMed

    Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch

    2016-11-14

    Self Consistent Field (SCF) theory serves as an efficient tool for studying the mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined by combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory, which uses partial enumeration to describe discrete WLC. In the MC simulations the Helmholtz free energy is calculated as a function of the strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier applications of TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscapes in MC and SCF are directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate for the missing correlations. Increasing σ reduces correlations, and SCF reproduces the free energy of the MC simulations well.
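
    At its core, TI recovers a free-energy difference by quadrature of ensemble averages along a coupling path. A minimal sketch of that step only (it omits the paper's external-field trick for bypassing the first-order transition; the integrand values are made up):

        import numpy as np

        # Thermodynamic integration: F(lambda=1) - F(lambda=0) is the integral of
        # <dU/dlambda>_lambda over the coupling parameter. In practice each value of
        # <dU/dlambda> comes from a separate MC run at fixed lambda; these are fake.
        lambdas = np.linspace(0.0, 1.0, 11)
        dU_dlambda = -2.0 + 1.5 * lambdas  # hypothetical per-run ensemble averages

        delta_F = np.trapz(dU_dlambda, lambdas)  # trapezoidal quadrature along the path
        print("Free-energy difference along the TI path:", delta_F)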

  14. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small-field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small-field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M^{f_clin}_{Q_clin}/M^{f_msr}_{Q_msr} with Monte Carlo (MC)-based field output factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC-simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes of 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of the LGK Perfexion, respectively. Finally, the PTW microDiamond M^{f_clin}_{Q_clin}/M^{f_msr}_{Q_msr} for the linear accelerator varied from the MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm^2 field size, with a 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small-field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
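
    The arithmetic behind the Alfonso-formalism correction is a single ratio: the output correction factor is the reference field output factor divided by the detector reading ratio. A tiny worked sketch with placeholder numbers (not the paper's values):

        # Alfonso small-field formalism:
        #   Omega^{f_clin,f_msr}_{Q_clin,Q_msr} = (M^{f_clin}/M^{f_msr}) * k_corr
        # so the detector-specific correction is k_corr = Omega / (M ratio).
        omega = 0.685    # hypothetical MC/diode field output factor for one field size
        m_ratio = 0.699  # hypothetical microDiamond reading ratio for the same field
        k_corr = omega / m_ratio
        print(f"output correction factor k = {k_corr:.3f} ({100 * (k_corr - 1):+.1f}%)")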

  15. Assessment of contrast gain signature in inferred magnocellular and parvocellular pathways in patients with glaucoma.

    PubMed

    Sun, Hao; Swanson, William H; Arvidson, Brian; Dul, Mitchell W

    2008-11-01

    Contrast gain signatures of inferred magnocellular and parvocellular postreceptoral pathways were assessed for patients with glaucoma using a contrast discrimination paradigm developed by Pokorny and Smith. The potential causes for changes in contrast gain signature were investigated using model simulations of ganglion cell contrast responses. Foveal contrast discrimination thresholds were measured with a pedestal-Delta-pedestal paradigm developed by Pokorny and Smith [Pokorny, J., & Smith, V. C. (1997). Psychophysical signatures associated with magnocellular and parvocellular pathway contrast gain. Journal of the Optical Society of America A, 14(9), 2477-2486]. Stimuli were 27 ms luminance increments superimposed on 227 ms pulsed Delta-pedestals. Contrast thresholds and contrast gain signatures mediated by the inferred magnocellular (MC) and parvocellular (PC) pathways were assessed using linear fits to contrast discrimination thresholds at either lower or higher Delta-pedestal contrasts, respectively. Twenty-seven patients with glaucoma were tested, as well as 16 age-similar control subjects free of eye disease. Contrast sensitivity and contrast gain signature mediated by the inferred MC pathway were lower for the glaucoma group, and reduced contrast gain signature was correlated with reduced contrast sensitivity (r^2 = 45%, p < .0005). These two parameters mediated by the inferred PC pathway were little affected for the glaucoma group. Model simulations suggest that the reduced contrast sensitivity and contrast gain signature were consistent with the hypothesis that reduced MC ganglion cell dendritic complexity can lead to reduced effective retinal illuminance, and hence increased semi-saturation contrast of the ganglion cell contrast response functions. The contrast sensitivity and contrast gain signature of the inferred MC pathway were reduced in patients with glaucoma. The results were consistent with a model of ganglion cell dysfunction due to reduced synaptic density.

  16. Calculations of absorbed fractions in small water spheres for low-energy monoenergetic electrons and the Auger-emitting radionuclides (123)I and (125)I.

    PubMed

    Bousis, Christos; Emfietzoglou, Dimitris; Nikjoo, Hooshang

    2012-12-01

    To calculate the absorbed fraction (AF) of low energy electrons in small tissue-equivalent spherical volumes by Monte Carlo (MC) track structure simulation and assess the influence of phase (liquid water versus density-scaled water vapor) and of the continuous-slowing-down approximation (CSDA) used in semi-analytic calculations. An event-by-event MC code simulating the transport of electrons in both the vapor and liquid phase of water using appropriate electron-water interaction cross sections was used to quantify the energy deposition of low-energy electrons in spherical volumes. Semi-analytic calculations within the CSDA using a convolution integral of the Howell range-energy expressions are also presented for comparison. The AF for spherical volumes of radii from 10-1000 nm are presented for monoenergetic electrons over the energy range 100-10,000 eV and the two Auger-emitting radionuclides (125)I and (123)I. The MC calculated AF for the liquid phase are found to be smaller than those of the (density scaled) gas phase by up to 10-20% for the monoenergetic electrons and 10% for the two Auger-emitters. Differences between the liquid-phase MC results and the semi-analytic CSDA calculations are up to ∼ 55% for the monoenergetic electrons and up to ∼ 35% for the two Auger-emitters. Condensed-phase effects in the inelastic interaction of low-energy electrons with water have a noticeable but relatively small impact on the AF for the energy range and target sizes examined. Depending on the electron energies, the semi-analytic approach may lead to sizeable errors for target sizes with linear dimensions below 1 micron.

  17. MO-G-17A-06: Kernel Based Dosimetry for 90Y Microsphere Liver Therapy Using 90Y Bremsstrahlung SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikell, J; Siman, W; Kappadath, S

    2014-06-15

    Purpose: 90Y microsphere therapy in liver presents a situation where beta transport is dominant and the tissue is relatively homogeneous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. The right lung was segmented via region growing. The kernel was generated with 1.04 g/cc soft tissue for 4.8 mm voxels matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean, D70, and V20Gy in tumor and liver was <7%, while right lung differences varied from 60-90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1-20) were <9 mm and d10(liver,MC,20,1-20) were <15 mm; both agreed within 3 mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34 mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses from the 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
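
    Kernel-based voxel dosimetry of this kind reduces to a 3-D convolution of the activity map with a dose kernel. A minimal sketch, assuming a made-up radially decaying kernel as a stand-in for a published 90Y beta kernel (the activity distribution is also fabricated):

        import numpy as np
        from scipy.signal import fftconvolve

        # Hypothetical activity map (Bq per 4.8 mm voxel) with a uniform "tumor" block.
        activity = np.zeros((48, 48, 48))
        activity[20:28, 20:28, 20:28] = 1.0e6

        # Placeholder dose kernel: arbitrary exponential falloff, units Gy/(Bq s).
        r = np.indices((9, 9, 9)) - 4
        kernel = np.exp(-np.sqrt((r ** 2).sum(axis=0)))
        kernel /= kernel.sum()

        dose_rate = fftconvolve(activity, kernel, mode="same")  # Gy/s per voxel

    The convolution implicitly assumes a homogeneous medium, which is exactly why the kernel agrees with MC in the liver but fails in the low-density lung, as the record above reports.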

  18. Exact Analytic Result of Contact Value for the Density in a Modified Poisson-Boltzmann Theory of an Electrical Double Layer.

    PubMed

    Lou, Ping; Lee, Jin Yong

    2009-04-14

    For a simple modified Poisson-Boltzmann (SMPB) theory, taking into account the finite ionic size, we have derived the exact analytic expression for the contact values of the difference profile of the counterion and co-ion, as well as of the sum (density) and product profiles, near a charged planar electrode that is immersed in a binary symmetric electrolyte. In the zero-ionic-size or dilute limit, these contact values reduce to the contact values of the Poisson-Boltzmann (PB) theory. The analytic results of the SMPB theory for the difference, sum, and product profiles were compared with the results of Monte Carlo (MC) simulations [Bhuiyan, L. B.; Outhwaite, C. W.; Henderson, D. J. Electroanal. Chem. 2007, 607, 54; Bhuiyan, L. B.; Henderson, D. J. Chem. Phys. 2008, 128, 117101], as well as of the PB theory. In general, the analytic expression of the SMPB theory gives better agreement with the MC data than the PB theory does. For the difference profile, as the electrode charge increases, the result of the PB theory departs from the MC data, but the SMPB theory still reproduces the MC data quite well, which indicates the importance of including steric effects in modeling diffuse-layer properties. As for the product profile, (i) it drops to zero as the electrode charge approaches infinity, and (ii) the speed of the drop increases with the ionic size; these behaviors are in contrast with the predictions of the PB theory, where the product is identically 1.
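
    For reference, the PB limits alluded to above can be written compactly. These are the standard mean-field expressions for a binary symmetric electrolyte of bulk density ρ_b at a planar electrode of surface charge density σ, quoted from general PB theory rather than extracted from the paper:

        \rho_\pm(x) = \rho_b \, e^{\mp e\psi(x)/k_B T},
        \qquad
        \frac{\rho_+(x)\,\rho_-(x)}{\rho_b^{2}} = 1 \quad \text{(product profile, constant in PB)},

        \rho_+(0) + \rho_-(0) = 2\rho_b + \frac{\sigma^{2}}{2\varepsilon\varepsilon_0 k_B T} \quad \text{(PB contact value of the density)}.

    The SMPB results of the record above reduce to these expressions in the zero-ionic-size or dilute limit, while the finite ionic size breaks the constancy of the product profile.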

  19. Molecular simulation of simple fluids and polymers in nanoconfinement

    NASA Astrophysics Data System (ADS)

    Rasmussen, Christopher John

    Prediction of phase behavior and transport properties of simple fluids and polymers confined to nanoscale pores is important to a wide range of chemical and biochemical engineering processes. A practical approach to investigate nanoscale systems is molecular simulation, specifically Monte Carlo (MC) methods. One of the most challenging problems is the need to calculate chemical potentials in simulated phases. Through the seminal work of Widom, practitioners have a powerful method for calculating chemical potentials. Yet, this method fails for dense and inhomogeneous systems, as well as for complex molecules such as polymers. In this dissertation, the gauge cell MC method, which had previously been successfully applied to confined simple fluids, was employed and extended to investigate nanoscale fluids in several key areas. Firstly, the process of cavitation (the formation and growth of bubbles) during desorption of fluids from nanopores was investigated. The dependence of cavitation pressure on pore size was determined with gauge cell MC calculations of the nucleation barriers correlated with experimental data. Additional computational studies elucidated the role of surface defects and pore connectivity in the formation of cavitation bubbles. Secondly, the gauge cell method was extended to polymers. The method was verified against the literature results and found significantly more efficient. It was used to examine adsorption of polymers in nanopores. These results were applied to model the dynamics of translocation, the act of a polymer threading through a small opening, which is implicated in drug packaging and delivery, and DNA sequencing. Translocation dynamics was studied as diffusion along the free energy landscape. Thirdly, we show how computer simulation of polymer adsorption could shed light on the specifics of polymer chromatography, which is a key tool for the analysis and purification of polymers. The quality of separation depends on the physico-chemical mechanisms of polymer/pore interaction. We considered liquid chromatography at critical conditions, and calculated the dependence of the partition coefficient on chain length. Finally, solvent-gradient chromatography was modeled using a statistical model of polymer adsorption. A model for predicting separation of complex polymers (with functional groups or copolymers) was developed for practical use in chromatographic separations.

  20. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

    We present theoretical models for protostellar binary and multiple systems based on the high-resolution numerical simulation with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of the binary and multiple star formation with high spatial resolutions. These observations should be compared with theoretical models with high spatial resolutions. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the model for MC27, we performed numerical simulations for gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the model for L1551 NE, we performed numerical simulations of gas accretion onto protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has been also observed by ALMA in the circumbinary disk of L1551 NE.
