Sample records for dosimetry les codes

  1. A dosimetry study comparing NCS report-5, IAEA TRS-381, AAPM TG-51 and IAEA TRS-398 in three clinical electron beam energies

    NASA Astrophysics Data System (ADS)

    Palmans, Hugo; Nafaa, Laila; de Patoul, Nathalie; Denis, Jean-Marc; Tomsej, Milan; Vynckier, Stefaan

    2003-05-01

    New codes of practice for reference dosimetry in clinical high-energy photon and electron beams have been published recently, to replace the air kerma based codes of practice that have determined the dosimetry of these beams for the past twenty years. In the present work, we compared dosimetry based on the two most widespread absorbed dose based recommendations (AAPM TG-51 and IAEA TRS-398) with two air kerma based recommendations (NCS report-5 and IAEA TRS-381). Measurements were performed in three clinical electron beam energies using two NE2571-type cylindrical chambers, two Markus-type plane-parallel chambers and two NACP-02-type plane-parallel chambers. Dosimetry based on direct calibrations of all chambers in 60Co was investigated, as well as dosimetry based on cross-calibrations of plane-parallel chambers against a cylindrical chamber in a high-energy electron beam. Furthermore, 60Co perturbation factors for plane-parallel chambers were derived. It is shown that the use of 60Co calibration factors could result in deviations of more than 2% for plane-parallel chambers between the old and new codes of practice, whereas the use of cross-calibration factors, which is the first recommendation in the new codes, reduces the differences to less than 0.8% for all situations investigated here. The results thus show that neither the chamber-to-chamber variations nor the obtained absolute dose values are significantly altered by changing from air kerma based to absorbed dose based dosimetry when using calibration factors obtained from the Laboratory for Standard Dosimetry, Ghent, Belgium. The values of the 60Co perturbation factor for plane-parallel chambers (k_att·k_m for the air kerma based and p_wall for the absorbed dose based codes of practice) obtained by comparing the results based on 60Co calibrations and cross-calibrations agree, within the experimental uncertainties, with the results of other investigators.
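
The cross-calibration procedure recommended by the new codes can be sketched as follows: the plane-parallel chamber's calibration coefficient is derived from a calibrated cylindrical chamber read out in the same high-energy electron beam. The numbers below are purely illustrative, not values from the paper.

```python
def cross_calibrate(m_cyl, n_dw_cyl, k_q_cyl, m_pp):
    """Cross-calibration of a plane-parallel (pp) chamber against a
    calibrated cylindrical chamber in the same high-energy electron beam.

    m_cyl    : corrected cylindrical-chamber reading (C)
    n_dw_cyl : absorbed-dose-to-water calibration coefficient (Gy/C)
    k_q_cyl  : beam-quality correction factor for the cylindrical chamber
    m_pp     : corrected plane-parallel-chamber reading (C)

    Returns the pp-chamber coefficient N_D,w * k_Q (Gy/C) at this quality.
    """
    dose_to_water = m_cyl * n_dw_cyl * k_q_cyl   # dose at the reference point
    return dose_to_water / m_pp

# Illustrative numbers only (not from the study):
n_pp = cross_calibrate(m_cyl=20.0e-9, n_dw_cyl=4.50e7, k_q_cyl=0.905,
                       m_pp=15.0e-9)
```

Because both chambers see the same dose, chamber-specific 60Co perturbation factors cancel, which is why the paper finds cross-calibration reduces inter-code deviations.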

  2. Sci—Thur AM: YIS - 03: irtGPUMCD: a new GPU-calculated dosimetry code for {sup 177}Lu-octreotate radionuclide therapy of neuroendocrine tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montégiani, Jean-François; Gaudin, Émilie; Després, Philippe

    2014-08-15

    In peptide receptor radionuclide therapy (PRRT), large inter-patient variability in absorbed radiation dose per administered activity mandates the use of individualized dosimetry to evaluate therapeutic efficacy and toxicity. We created a reliable GPU-calculated dosimetry code (irtGPUMCD) and assessed {sup 177}Lu-octreotate renal dosimetry in eight patients (4 cycles of approximately 7.4 GBq). irtGPUMCD was derived from a brachytherapy dosimetry code (bGPUMCD), which was adapted to {sup 177}Lu PRRT dosimetry. Serial quantitative single-photon emission computed tomography (SPECT) images were obtained from three SPECT/CT acquisitions performed at 4, 24 and 72 hours after {sup 177}Lu-octreotate administration, and registered with non-rigid deformation of CT volumes, to obtain the {sup 177}Lu-octreotate 4D quantitative biodistribution. Local energy deposition from the β disintegrations was assumed. Using Monte Carlo gamma photon transport, irtGPUMCD computed the dose rate at each time point. The average kidney absorbed dose was obtained from 1-cm{sup 3} VOI dose rate samples on each cortex, subjected to a biexponential curve fit. Integration of the resulting time-dose rate curve yielded the renal absorbed dose. The mean renal dose per administered activity was 0.48 ± 0.13 Gy/GBq (range: 0.30–0.71 Gy/GBq). Comparison to another PRRT dosimetry code (VRAK: Voxelized Registration and Kinetics) showed fair agreement with irtGPUMCD (11.4 ± 6.8%, range: 3.3–26.2%). These results suggest that the irtGPUMCD code could be used to personalize administered activity in PRRT, which could improve clinical outcomes by maximizing per-cycle tumor doses without exceeding the tolerable renal dose.
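
The final integration step described above (a biexponential fit of the time-dose-rate samples, integrated to infinity) can be sketched as follows; the fitted parameters are invented for illustration and are not the study's values.

```python
import math

# Hypothetical fitted parameters of a renal time-dose-rate curve
#   D_rate(t) = A1*exp(-l1*t) + A2*exp(-l2*t)   (Gy/h, t in hours);
# the values below are illustrative, not taken from the study.
A1, l1 = 0.012, 0.25    # fast component
A2, l2 = 0.004, 0.011   # slow component (effective clearance)

def dose_rate(t):
    return A1 * math.exp(-l1 * t) + A2 * math.exp(-l2 * t)

# The biexponential integrates analytically from t = 0 to infinity:
absorbed_dose = A1 / l1 + A2 / l2   # Gy

# Numerical cross-check with a trapezoidal rule on [0, 2000] h (h = 0.5 h)
ts = [i * 0.5 for i in range(4001)]
numeric = sum((dose_rate(a) + dose_rate(b)) * 0.25 for a, b in zip(ts, ts[1:]))
```

The analytic integral is what makes the biexponential fit attractive: no numerical tail extrapolation beyond the last imaging time point is needed.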

  3. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  4. A test of the IAEA code of practice for absorbed dose determination in photon and electron beams

    NASA Astrophysics Data System (ADS)

    Leitner, Arnold; Tiefenboeck, Wilhelm; Witzani, Josef; Strachotinsky, Christian

    1990-12-01

    The IAEA (International Atomic Energy Agency) code of practice TRS 277 gives recommendations for absorbed dose determination in high energy photon and electron beams based on the use of ionization chambers calibrated in terms of exposure or air kerma. The scope of the work was to test the code for cobalt 60 gamma radiation and for several radiation qualities at four different types of electron accelerators, and to compare the ionization chamber dosimetry with ferrous sulphate dosimetry. The results show agreement between the two methods within about one per cent for all the investigated qualities. In addition, the response of the TLD capsules of the IAEA/WHO TL dosimetry service was determined.

  5. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; hide

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models, with particular emphasis on the hadronic and nuclear sector.

  6. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBq·s) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results agree with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).
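
This is not ARCHER's method, but the order of magnitude of a self-absorption dose factor can be checked by hand under the local-energy-deposition assumption for the beta component alone (photons, which partially escape the sphere, are ignored, so this underestimates the full Monte Carlo result). The mean I-131 beta energy and tissue density used are approximate assumptions.

```python
import math

# Rough order-of-magnitude check of a self-absorption dose factor for a
# uniformly active soft-tissue sphere, counting only the beta component
# under a local-energy-deposition assumption (photons ignored).
E_BETA_MEV = 0.192          # assumed mean beta energy per I-131 decay (MeV)
MEV_TO_J = 1.602e-13
DENSITY = 1.03e3            # soft tissue, kg/m^3 (approximate)

def beta_dose_factor(diameter_cm):
    """Return the beta-only self-dose factor in mGy per MBq*s."""
    r = diameter_cm / 200.0                     # radius in metres
    mass = DENSITY * 4.0 / 3.0 * math.pi * r**3
    gy_per_decay = E_BETA_MEV * MEV_TO_J / mass
    return gy_per_decay * 1e6 * 1e3             # 1e6 decays/(MBq*s), Gy -> mGy

factor = beta_dose_factor(12.4)   # ~3e-5 mGy/(MBq*s) for a ~1 kg sphere
```

Smaller spheres give larger self-dose factors, since the same per-decay energy is deposited in less mass.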

  7. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for internal dosimetry. Volume 2: Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.

    1998-04-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  8. Treating voxel geometries in radiation protection dosimetry with a patched version of the Monte Carlo codes MCNP and MCNPX.

    PubMed

    Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P

    2007-01-01

    The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes are developed, aimed at transporting radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed together with the new input options. Examples are given of the use of the code in internal and external dosimetry, and comparisons with results from other groups are reported.

  9. Items Supporting the Hanford Internal Dosimetry Program Implementation of the IMBA Computer Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, Eugene H.; Bihl, Donald E.

    2008-01-07

    The Hanford Internal Dosimetry Program has adopted the computer code IMBA (Integrated Modules for Bioassay Analysis) as its primary code for bioassay data evaluation and dose assessment using methodologies of ICRP Publications 60, 66, 67, 68, and 78. The adoption of this code was part of the implementation plan for the June 8, 2007 amendments to 10 CFR 835. This information release includes action items unique to IMBA that were required by PNNL quality assurance standards for implementation of safety software. Copies of the IMBA software verification test plan and the outline of the briefing given to new users are also included.

  10. FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Rizeq; Janice West; Arnaldo Frydman

    Further development of a combustion Large Eddy Simulation (LES) code for the design of advanced gaseous combustion systems is described in this sixth quarterly report. CFD Research Corporation (CFDRC) is developing the LES module within the parallel, unstructured solver included in the commercial CFD-ACE+ software. In this quarter, in-situ adaptive tabulation (ISAT) for efficient chemical rate storage and retrieval was implemented and tested within the Linear Eddy Model (LEM). ISAT type 3 is being tested so that extrapolation can be performed to further improve the retrieval rate. Further testing of the LEM for subgrid chemistry was performed for parallel applications and for multi-step chemistry. Validation of the software on backstep and bluff-body reacting cases was performed. Initial calculations of the SimVal experiment at Georgia Tech using their LES code were performed. Georgia Tech continues the effort to parameterize the LEM over composition space so that a neural net can be used efficiently in the combustion LES code. A new and improved Artificial Neural Network (ANN), with log-transformed output, for the 1-step chemistry was implemented in CFDRC's LES code and gave reasonable results. This quarter, the 2nd consortium meeting was held at CFDRC. Next quarter, LES software development and testing will continue. Alpha testing of the code will continue to be performed on cases of interest to the industrial consortium. Optimization of subgrid models will be pursued, particularly with the ISAT approach. Also next quarter, the demonstration of the neural net approach for multi-step chemical kinetics speed-up in CFD-ACE+ will be accomplished.

  11. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

    We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide therapy (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with those reported in the literature using other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

    The Monte Carlo transport code MCNP has been applied in simulating the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose-ratios and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators.
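
The "over-dose-ratio" quoted among the system design data (often called the dose uniformity ratio) is simply the ratio of maximum to minimum dose anywhere in the product volume. A minimal sketch, with an invented dose grid rather than IR-136 data:

```python
# Dose uniformity (over-dose) ratio from a simulated dose map: the ratio of
# the maximum to the minimum dose in the product volume. The 2x3 grid of
# doses (kGy) below is illustrative only.
dose_map = [
    [28.1, 25.4, 26.9],
    [31.7, 25.0, 27.3],
]
d_max = max(max(row) for row in dose_map)
d_min = min(min(row) for row in dose_map)
over_dose_ratio = d_max / d_min
```

A ratio close to 1 means the product is irradiated uniformly; irradiator design aims to keep it low while meeting the minimum-dose specification.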

  13. The internal dosimetry code PLEIADES.

    PubMed

    Fell, T P; Phipps, A W; Smith, T J

    2007-01-01

    The International Commission on Radiological Protection (ICRP) has published dose coefficients for the ingestion or inhalation of radionuclides in a series of reports covering intakes by workers and members of the public, including children and pregnant or lactating women. The calculation of these coefficients divides naturally into two distinct parts: the biokinetic and the dosimetric. This paper describes in detail the methods used to solve the biokinetic problem in the generation of dose coefficients on behalf of the ICRP, as implemented in the Health Protection Agency's internal dosimetry code PLEIADES. A summary of the dosimetric treatment is included.
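
The biokinetic problem referred to above is a system of first-order transfer equations between body compartments. A minimal two-compartment sketch (a 1 → 2 → excretion chain with invented transfer coefficients, solved analytically with the Bateman solution) illustrates the kind of system such a code integrates; it is not PLEIADES' actual model structure.

```python
import math

# Minimal biokinetic chain: compartment 1 -> compartment 2 -> excretion,
# governed by dq1/dt = -k12*q1 and dq2/dt = k12*q1 - k2x*q2.
# Transfer coefficients (per day) are invented for illustration.
k12 = 0.5   # compartment 1 -> 2
k2x = 0.1   # compartment 2 -> excretion

def contents(t, q1_0=1.0):
    """Analytic (Bateman) compartment contents at time t (days)."""
    q1 = q1_0 * math.exp(-k12 * t)
    q2 = q1_0 * k12 / (k2x - k12) * (math.exp(-k12 * t) - math.exp(-k2x * t))
    return q1, q2
```

Real biokinetic models have many coupled compartments with recycling, so the matrix exponential (or a stiff ODE solver) replaces the closed-form chain solution, but the structure is the same.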

  14. Retrospective dosimetry analyses of reactor vessel cladding samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwood, L. R.; Soderquist, C. Z.; Fero, A. H.

    2011-07-01

    Reactor pressure vessel cladding samples for Ringhals Units 3 and 4 in Sweden were analyzed using retrospective reactor dosimetry techniques. The objective was to provide the best estimates of the neutron fluence for comparison with neutron transport calculations. A total of 51 stainless steel samples consisting of chips weighing approximately 100 to 200 mg were removed from selected locations around the pressure vessel and were sent to Pacific Northwest National Laboratory for analysis. The samples were fully characterized and analyzed for radioactive isotopes, with special interest in the presence of Nb-93m. The RPV cladding retrospective dosimetry results will be combined with a re-evaluation of the surveillance capsule dosimetry and with ex-vessel neutron dosimetry results to form a comprehensive 3D comparison of measurements to calculations performed with a 3D deterministic transport code. (authors)
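
Retrospective dosimetry of this kind rests on the activation equation: the measured activity of an activation product (here Nb-93m) constrains the time-averaged neutron flux the sample saw. A minimal sketch with invented sample parameters (atom count and effective cross section are placeholders, not Ringhals values; only the Nb-93m half-life, about 16.1 years, is a physical constant):

```python
import math

# Activation equation: A = N * sigma * phi * (1 - exp(-lam*t_irr)) * exp(-lam*t_cool)
# All sample-specific numbers below are illustrative, not Ringhals data.
N_ATOMS = 3.0e20   # target atoms in the chip (invented)
SIGMA = 1.0e-28    # effective reaction cross section, m^2 (~1 barn, invented)
LAM = math.log(2.0) / (16.1 * 365.25 * 24 * 3600)   # Nb-93m decay const (1/s)

def expected_activity(phi, t_irr_s, t_cool_s):
    """Activity (Bq) after irradiating for t_irr_s and cooling for t_cool_s."""
    return (N_ATOMS * SIGMA * phi
            * (1.0 - math.exp(-LAM * t_irr_s)) * math.exp(-LAM * t_cool_s))

def inferred_flux(activity_bq, t_irr_s, t_cool_s):
    """Invert the activation equation for the time-averaged flux (n m^-2 s^-1)."""
    sat = 1.0 - math.exp(-LAM * t_irr_s)
    return activity_bq / (N_ATOMS * SIGMA * sat * math.exp(-LAM * t_cool_s))
```

In practice the irradiation history is not constant, so the single (1 - exp) saturation term is replaced by a sum over operating cycles, but the inversion principle is the same.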

  15. Validating LES for Jet Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2011-01-01

    Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and interpreting the massive datasets that result when dreams come true. This paper primarily addresses the former: the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem in discussing which measures critical for aeroacoustics should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And what measures are to be used in validation? This paper argues that the issue of accuracy should be addressed by cross-facility and cross-disciplinary examination of modern datasets, along with increased reporting of internal quality checks in PIV analysis. Further, it is argued that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound.

  16. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes, one based on iterative procedures and one on artificial neural networks, is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), is based on traditional iterative procedures: it uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. Its main feature is the automated selection of the initial guess spectrum from a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural-network technology. The neural-network approach does not solve mathematical equations explicitly; using the knowledge stored in the synaptic weights of a properly trained network, the code can unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, requiring as input only the count rates measured with a Bonner sphere system. The two codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment; both unfold the neutron spectrum in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. They differ in that NSDUAZ uses classical iterative approaches and needs an initial guess spectrum to start the iterative procedure, and includes a routine that calculates the readings of 7 IAEA survey instruments using fluence-to-dose conversion coefficients, whereas NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network. In contrast to iterative procedures, the neural-network approach makes it possible to reduce the number of count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called the Neutron Spectrometry and Dosimetry computer tool was designed, and the results obtained with this package are shown. The codes mentioned here are freely available upon request from the authors.
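
The iterative-unfolding idea can be sketched generically. This is not the SPUNIT algorithm itself, but an MLEM-style multiplicative update on a toy 3-sphere, 4-bin response matrix (all numbers invented): each spectrum bin is rescaled by the back-projected ratio of measured to predicted counts.

```python
# Generic MLEM-style iterative unfolding (not SPUNIT itself): starting from
# a flat guess, each spectrum bin is scaled by the response-weighted ratio
# of measured to predicted counts until the predictions match.
R = [  # toy response matrix: 3 Bonner spheres x 4 energy bins (invented)
    [0.9, 0.5, 0.2, 0.1],
    [0.3, 0.8, 0.6, 0.3],
    [0.1, 0.2, 0.7, 0.9],
]
phi_true = [1.0, 2.0, 0.5, 1.5]          # "unknown" spectrum (invented)
counts = [sum(R[j][i] * phi_true[i] for i in range(4)) for j in range(3)]

phi = [1.0] * 4                           # flat initial guess
for _ in range(500):
    pred = [sum(R[j][i] * phi[i] for i in range(4)) for j in range(3)]
    for i in range(4):
        num = sum(R[j][i] * counts[j] / pred[j] for j in range(3))
        den = sum(R[j][i] for j in range(3))
        phi[i] *= num / den

refit = [sum(R[j][i] * phi[i] for i in range(4)) for j in range(3)]
```

Because the system is underdetermined (more bins than spheres), the iteration reproduces the measured counts but not necessarily the true spectrum, which is exactly why the choice of initial guess spectrum matters in codes like NSDUAZ.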

  17. Comparison of four large-eddy simulation research codes and effects of model coefficient and inflow turbulence in actuator-line-based wind turbine modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre; ...

    2018-05-16

    Here, large-eddy simulation (LES) of a wind turbine under uniform inflow is performed using an actuator line model (ALM). Predictions from four LES research codes from the wind energy community are compared. The implementation of the ALM in all codes is similar and quantities along the blades are shown to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform laminar inflow. Simulations are also performed using uniform mean velocity inflow with added homogeneous isotropic turbulence from a public database. The time-averaged loads along the blade do not depend on the inflow turbulence. Moreover, and in contrast to the uniform inflow cases, the Smagorinsky coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, careful implementation and extensive cross-verification among codes can result in highly reproducible predictions. Moreover, the characteristics of the inflow turbulence appear to be more important than the details of the subgrid-scale modeling employed in the wake, at least for LES of wind energy applications at the resolutions tested in this work.
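
The Smagorinsky coefficient discussed above enters the subgrid-scale model through the eddy viscosity, nu_t = (Cs·Delta)^2·|S|. A minimal sketch (the numeric inputs are illustrative, not values from the study):

```python
def smagorinsky_nu_t(c_s, delta, strain_rate_mag):
    """Smagorinsky subgrid-scale eddy viscosity: nu_t = (Cs * Delta)^2 * |S|.

    c_s             : Smagorinsky coefficient (dimensionless)
    delta           : filter width, typically the grid spacing (m)
    strain_rate_mag : |S| = sqrt(2 S_ij S_ij), resolved strain rate (1/s)
    """
    return (c_s * delta) ** 2 * strain_rate_mag

# Doubling Cs quadruples nu_t, which is one way to see why the
# wake-breakdown location is sensitive to it in laminar inflow:
nu_a = smagorinsky_nu_t(0.08, 2.5, 1.2)
nu_b = smagorinsky_nu_t(0.16, 2.5, 1.2)
```

With turbulent inflow, resolved fluctuations dominate the wake dynamics, so this quadratic sensitivity to Cs is masked, consistent with the paper's conclusion.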

  19. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of the film cooling process, and to evaluate and improve advanced forms of the two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the developed codes to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their application to analyzing the turbulence properties at locations near the coolant injection holes.

  20. ALGEBRA: ALgorithm for the heterogeneous dosimetry based on GEANT4 for BRAchytherapy.

    PubMed

    Afsharpour, H; Landry, G; D'Amours, M; Enger, S; Reniers, B; Poon, E; Carrier, J-F; Verhaegen, F; Beaulieu, L

    2012-06-07

    Task group 43 (TG43)-based dosimetry algorithms are efficient for brachytherapy dose calculation in water. However, human tissues have chemical compositions and densities different from those of water. Moreover, the mutual shielding effect of seeds on each other (interseed attenuation) is neglected in TG43-based dosimetry platforms. The scientific community has expressed the need for an accurate dosimetry platform in brachytherapy. The purpose of this paper is to present ALGEBRA, a Monte Carlo platform for dosimetry in brachytherapy which is sufficiently fast and accurate for clinical and research purposes. ALGEBRA is based on the GEANT4 Monte Carlo code and is capable of handling the DICOM RT standard to recreate a virtual model of the treated site. Here, the performance of ALGEBRA is presented for the special case of LDR brachytherapy in permanent prostate and breast seed implants. However, the algorithm is also capable of handling other treatments such as HDR brachytherapy.
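
For context, the water-based TG43 formalism that ALGEBRA improves on can be sketched in its point-source form, D(r) = S_K · Λ · (r0/r)^2 · g(r) · φ_an. The seed data below are invented placeholders, not any real seed's consensus values.

```python
# Minimal TG43 point-source dose rate in water (the formalism the paper's
# Monte Carlo platform improves on). Seed data are illustrative only.
S_K = 0.5          # air-kerma strength, U
LAMBDA = 0.965     # dose-rate constant, cGy / (h U)
PHI_AN = 0.94      # anisotropy factor (point-source approximation)
G_TABLE = [(0.5, 1.04), (1.0, 1.00), (2.0, 0.93), (3.0, 0.85)]  # r(cm), g(r)

def radial_g(r):
    """Linear interpolation of the radial dose function g(r)."""
    for (ra, ga), (rb, gb) in zip(G_TABLE, G_TABLE[1:]):
        if ra <= r <= rb:
            return ga + (gb - ga) * (r - ra) / (rb - ra)
    raise ValueError("r outside tabulated range")

def dose_rate(r_cm):
    """TG43 point-source dose rate (cGy/h), reference distance r0 = 1 cm."""
    return S_K * LAMBDA * (1.0 / r_cm) ** 2 * radial_g(r_cm) * PHI_AN

d1 = dose_rate(1.0)   # at r0 this reduces to S_K * Lambda * phi_an
```

Everything here is tabulated in water, which is precisely the limitation (tissue composition, interseed attenuation) that a full Monte Carlo platform like ALGEBRA removes.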

  1. Methods of calculating aerodynamic forces for aeroservoelastic interaction studies

    NASA Astrophysics Data System (ADS)

    Biskri, Djallel Eddine

    Aeroservoelasticity is a field in which an aircraft's flexible structure, aerodynamics and flight control interact. Flight control, for its part, treats the aircraft as a rigid structure and studies the influence of the control system on the flight dynamics. In this thesis, we implemented three new methods for approximating aerodynamic forces: corrected least squares, corrected minimum state, and combined states. In the first two methods, the approximation errors between the aerodynamic forces approximated by the classical methods and those obtained by the new methods have the same analytical forms as those of the aerodynamic forces calculated by LS or MS. The third method combines the formulations of the approximated forces with the standard LS and MS methods. The flutter speeds and frequencies, and the execution times, calculated by the new methods were analyzed against those calculated by the classical methods.

  2. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation.

    PubMed

    Breton, S-P; Sumner, J; Sørensen, J N; Hansen, K S; Sarmast, S; Ivanell, S

    2017-04-13

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  4. Strategic Consolidation of Medical War Reserve Material (WRM) Equipment Unit Type Code (UTC) Assemblages

    DTIC Science & Technology

    2013-03-01

    anomalies such as listing 640 deployments in 2003 but only 11 for 2005 and 3 for 2008. The poor data quality was attributed to a lost hard drive... [The remainder of the excerpt is a fragmentary table of UTC assemblages; the recoverable entries are 902P RAD/NUC Dosimetry Aug Equipment, 9171 Expeditionary Dental Clinic, 9171 High Altitude Air Drop Mission Support and 903A Oxygen.]

  5. IPEM guidelines on dosimeter systems for use as transfer instruments between the UK primary dosimetry standards laboratory (NPL) and radiotherapy centres

    NASA Astrophysics Data System (ADS)

    Morgan, A. M.; Aird, E. G. A.; Aukett, R. J.; Duane, S.; Jenkins, N. H.; Mayles, W. P. M.; Moretti, C.; Thwaites, D. I.

    2000-09-01

    United Kingdom dosimetry codes of practice have traditionally specified one electrometer for use as a secondary standard, namely the Nuclear Enterprises (NE) 2560 NPL secondary standard therapy level exposure meter. The NE2560 will become obsolete in the foreseeable future. This report provides guidelines to assist physicists following the United Kingdom dosimetry codes of practice in the selection of an electrometer to replace the NE2560 when necessary. Using an internationally accepted standard (BS EN 60731:1997) as a basis, estimated error analyses demonstrate that the uncertainty (one standard deviation) in a charge measurement associated with the NE2560 alone is approximately 0.3% under specified conditions. Following a review of manufacturers' literature, it is considered that modern electrometers should be capable of equalling this performance. Additional constructional and operational requirements not specified in the international standard but considered essential in a modern electrometer to be used as a secondary standard are presented.
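
    A quoted figure such as 0.3% typically results from combining independent uncertainty components in quadrature, as is standard in this kind of error analysis. A minimal sketch; the component values below are hypothetical illustrations, not taken from the IPEM report:

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent standard-uncertainty components in quadrature."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical component values (percent, one standard deviation) for an
# electrometer charge measurement: linearity, leakage, resolution, stability.
components = [0.2, 0.15, 0.1, 0.1]
u_c = combined_standard_uncertainty(components)
print(f"combined standard uncertainty: {u_c:.2f}%")  # → 0.29%
```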

  6. Developpement et validation d'un outil base sur l'acoustique geometrique pour le diagnostic du bruit de nacelle

    NASA Astrophysics Data System (ADS)

    Minard, Benoit

    Nowadays, the problem of aircraft-generated noise has become an important development topic in aeronautics. Many studies are therefore carried out in this field, and a first approach consists in modelling this noise numerically so as to substantially reduce design costs. In this context, an engine manufacturer asked the Université de Sherbrooke, and more specifically the acoustics group of the Université de Sherbrooke (GAUS), to develop a tool for computing the propagation of acoustic waves in nacelles and for studying installation effects. This prediction tool allows them to run studies to optimize the acoustic treatments ("liners") and the geometry of the nacelles for work on the nacelle interior, and to study engine positioning and design for installation effects. The objective of this Master's project was therefore to continue the work of [Gousset, 2011] on the use of a ray-tracing method for studying the installation effects of aircraft engines. Improving the code's speed, reliability and generality were the main objectives. The code can handle acoustically treated surfaces ("liners"), can account for edge diffraction, and can be used for studies in complex environments such as aircraft nacelles. The code works in 3D and proceeds in three steps: (1) computation of the initial beams (subdivision of a sphere or half-sphere, meshing of the geometry's surfaces); (2) propagation of the beams through the environment under study: computation of all the characteristics of the convergent rays (amplitude, phase, number of reflections, ...);
(3) reconstruction of the pressure field at one or more points in space from the convergent rays (summation of the contributions of each ray): coherent summation. The code (GA3DP) accounts for wall surface treatments, source directivity, atmospheric attenuation and first-order diffraction. The code was validated against several methods, such as the image-source method, modal analysis and the boundary element method. A Matlab module was created specifically for the study of installation effects and integrated into the existing code at Pratt & Whitney Canada. Keywords: geometrical acoustics - ray tracing - beam tracing - diffraction - coherent summation - sound pressure level.
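
    The coherent summation of step (3) amounts to adding the complex pressure contributions of all convergent rays before taking the magnitude. A minimal sketch under simplifying assumptions (single frequency, point receiver, phase determined only by path length); the function name and values are illustrative, not from GA3DP:

```python
import numpy as np

def coherent_pressure(amplitudes, path_lengths, freq, c=343.0):
    """Coherently sum the contributions of converging rays at a receiver.

    Each ray contributes a complex pressure A_i * exp(-i k L_i), where
    k = 2*pi*f/c is the wavenumber and L_i the propagation path length.
    """
    k = 2 * np.pi * freq / c
    contributions = amplitudes * np.exp(-1j * k * np.asarray(path_lengths))
    return np.abs(np.sum(contributions))

# Two rays of equal amplitude arriving in phase double the pressure;
# a half-wavelength path difference cancels them.
f = 343.0           # wavelength of 1 m at c = 343 m/s
in_phase = coherent_pressure(np.array([1.0, 1.0]), [2.0, 3.0], f)   # ΔL = 1 λ
out_phase = coherent_pressure(np.array([1.0, 1.0]), [2.0, 2.5], f)  # ΔL = λ/2
print(in_phase, out_phase)  # ≈ 2.0 and ≈ 0.0
```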

  7. Translation into French of: “Changes to publication requirements made at the XVIII International Botanical Congress in Melbourne – what does e-publication mean for you?”. Translated by Christian Feuillet and Valéry Malécot Changements des conditions requises pour la publication faits au XVIII e Congrès International de Botanique à Melbourne – qu’est-ce que la publication électronique représente pour vous?

    PubMed Central

    Knapp, Sandra; McNeill, John; Turland, Nicholas J.

    2011-01-01

    Summary: Changes to the International Code of Botanical Nomenclature are decided every six years at the Nomenclature Sections associated with the International Botanical Congresses (IBC). The XVIII IBC was held in Melbourne, Australia; the Nomenclature Section met on 18-22 July 2011 and its decisions were accepted by the Congress in plenary session on 30 July. Following this meeting, several important changes were made to the Code that will affect the publication of new names. Two of these changes take effect on 1 January 2012, a few months before the Melbourne Code is published. Electronic material published online in Portable Document Format (PDF) with an International Standard Serial Number (ISSN) or an International Standard Book Number (ISBN) will constitute effective publication, and the requirement of a Latin description or diagnosis for names of new taxa will be changed to a requirement of a description or diagnosis in either Latin or English. In addition, from 1 January 2013, new names of organisms treated as fungi must, for valid publication, include in the protologue (everything associated with the name at the time of valid publication) the citation of an identifier issued by a recognized repository (such as MycoBank). Draft text of the new articles dealing with electronic publication is provided and best practice is outlined. To encourage dissemination of the changes adopted to the International Code of Nomenclature for algae, fungi, and plants, this article will be published in BMC Evolutionary Biology, Botanical Journal of the Linnean Society, Brittonia, Cladistics, MycoKeys, Mycotaxon, New Phytologist, North American Fungi, Novon, Opuscula Philolichenum, PhytoKeys, Phytoneuron, Phytotaxa, Plant Diversity and Resources, Systematic Botany and Taxon. PMID:22287925

  8. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

    The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. Compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap are compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes the paper with final remarks and future work.
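
    The spatial domain decomposition idea underlying such parallel transport codes can be illustrated with a simple partitioner that splits a structured 3-D mesh into one block per process. This is a generic sketch, not code from RAPTOR-M3G:

```python
def decompose_3d(nx, ny, nz, px, py, pz):
    """Split an nx*ny*nz spatial mesh into px*py*pz blocks, one per process.

    Returns, for each rank, the (start, stop) index range along each axis.
    Remainder cells are spread over the leading blocks, as is conventional.
    """
    def split(n, p):
        base, rem = divmod(n, p)
        ranges, start = [], 0
        for i in range(p):
            size = base + (1 if i < rem else 0)
            ranges.append((start, start + size))
            start += size
        return ranges
    xs, ys, zs = split(nx, px), split(ny, py), split(nz, pz)
    return [(x, y, z) for x in xs for y in ys for z in zs]

blocks = decompose_3d(100, 100, 50, 2, 2, 1)
print(len(blocks))  # → 4 subdomains for 4 processes
```

Each rank would then sweep only its own index ranges, exchanging boundary angular fluxes with neighbors.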

  9. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat-plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.

  10. Application des codes de Monte Carlo à la radiothérapie par rayonnement à faible TEL

    NASA Astrophysics Data System (ADS)

    Marcié, S.

    1998-04-01

    In radiation therapy, several low-LET radiations are used: 60Co photons, photons and electrons from 4 to 25 MV generated in linear accelerators, and photons from 137Cs, 192Ir and 125I. To determine as accurately as possible the dose delivered to tissue by these radiations, software and measurement instruments are used. With the growth in the power and capacity of computers, the application of Monte Carlo codes has extended to radiation therapy, making it possible to better characterize the effects of these radiations, determine their spectra, refine the values of the parameters used in dosimetric calculations, verify algorithms, study measurement systems and phantoms, calculate the dose at points inaccessible to measurement, and consider the use of new radionuclides.
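
    At the core of such Monte Carlo codes is the sampling of photon interaction depths from the exponential attenuation law. A toy sketch under strong simplifications (homogeneous medium, local energy absorption, no scattering; the attenuation coefficient is an illustrative value):

```python
import math, random

def mc_depth_dose(mu, depth_bins, n_photons=100_000, bin_width=1.0, seed=1):
    """Toy Monte Carlo: sample first-interaction depths of photons in a
    homogeneous medium (linear attenuation coefficient mu, in 1/cm) and
    score them per depth bin, assuming local absorption and no scatter."""
    random.seed(seed)
    scores = [0] * depth_bins
    for _ in range(n_photons):
        depth = -math.log(1.0 - random.random()) / mu  # exponential free path
        b = int(depth / bin_width)
        if b < depth_bins:
            scores[b] += 1
    return scores

scores = mc_depth_dose(mu=0.2, depth_bins=5)
# Successive 1 cm bins fall off roughly as exp(-mu * z)
print([round(s / scores[0], 2) for s in scores])
```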

  11. Experimental verification of bremsstrahlung production and dosimetry predictions for 15.5 MeV electrons

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Beutler, D. E.; Halbleib, J. A.; Knott, D. P.

    1991-12-01

    The radiation produced by a 15.5-MeV monoenergetic electron beam incident on optimized and nonoptimized bremsstrahlung targets is characterized using the ITS Monte Carlo code and measurements with equilibrated and nonequilibrated TLD dosimetry. Comparisons between calculations and measurements verify the calculations and demonstrate that the code can be used to predict both bremsstrahlung production and TLD response for radiation fields that are characteristic of those produced by pulsed simulators of gamma rays. The comparisons provide independent confirmation of the validity of the TLD calibration for photon fields characteristic of gamma-ray simulators. The empirical Martin equation, which is often used to calculate radiation dose from optimized bremsstrahlung targets, is examined, and its range of validity is established.

  12. Validating LES for Jet Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Bridges, James; Wernet, Mark P.

    2011-01-01

    Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and the interpretation of the massive datasets that are produced. This paper addresses the former, the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. This paper argues that the issue of accuracy of the experimental measurements be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it argues that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound, such as two-point space-time velocity correlations. A brief review of data sources available is presented along with examples illustrating cross-facility and internal quality checks required of the data before it should be accepted for validation of LES.
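
    The two-point space-time velocity correlations advocated here as validation metrics can be estimated directly from gridded velocity data. A minimal sketch for a field with one spatial dimension; the synthetic convecting wave used as input is illustrative:

```python
import numpy as np

def space_time_correlation(u, dx_shift, dt_shift):
    """Normalized two-point space-time correlation of a fluctuating velocity
    field u[t, x]: R(dx, tau) = <u'(x,t) u'(x+dx, t+tau)> / <u'^2>."""
    up = u - u.mean()
    a = up[:-dt_shift or None, :-dx_shift or None]
    b = up[dt_shift:, dx_shift:]
    return float((a * b).mean() / (up ** 2).mean())

# A frozen pattern convecting one grid cell per step stays perfectly
# correlated when the shifts match the convection speed (Taylor's hypothesis).
t, x = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
u = np.sin(2 * np.pi * (x - t) / 16)          # wave moving +1 cell per step
print(space_time_correlation(u, dx_shift=4, dt_shift=4))  # ≈ 1.0
```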

  13. Dosimetry of Al2O3 optically stimulated luminescent dosimeter at high energy photons and electrons

    NASA Astrophysics Data System (ADS)

    Yusof, M. F. Mohd; Joohari, N. A.; Abdullah, R.; Shukor, N. S. Abd; Kadir, A. B. Abd; Isa, N. Mohd

    2018-01-01

    The linearity of Al2O3 OSL dosimeters (OSLDs) was evaluated for dosimetry work in clinical photon and electron beams. The measurements were made at the reference depth Zref, according to the IAEA TRS 398:2000 code of practice, in 6 and 10 MV photon beams and 6 and 9 MeV electron beams. The measured dose was compared to thermoluminescence dosimeters (TLDs) and to the ionization chamber commonly used for dosimetry of high-energy photons and electrons. The results showed that the doses measured with the OSL dosimeters were in good agreement with those reported by the ionization chamber for both high-energy photons and electrons. A reproducibility test also showed excellent consistency of the OSL readings at the same energy levels. The overall results confirm the suitability of OSL dosimeters for dosimetry work involving high-energy photons and electrons in radiotherapy.
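
    A linearity evaluation of this kind reduces to a least-squares fit of dosimeter response against delivered dose. A sketch with entirely hypothetical readings, not the study's data:

```python
import numpy as np

# Hypothetical OSLD readings (counts) versus delivered dose (Gy); the values
# below are illustrative, not measured data.
dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
reading = np.array([51.0, 103.0, 198.0, 305.0, 398.0])

slope, intercept = np.polyfit(dose, reading, 1)
pred = slope * dose + intercept
r2 = 1 - np.sum((reading - pred) ** 2) / np.sum((reading - reading.mean()) ** 2)
print(f"sensitivity = {slope:.1f} counts/Gy, R^2 = {r2:.4f}")
```

An R^2 close to 1 over the clinical dose range is what supports a claim of linear response.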

  14. Dosimetry of gamma chamber blood irradiator using PAGAT gel dosimeter and Monte Carlo simulations

    PubMed Central

    Mohammadyari, Parvin; Zehtabian, Mehdi; Sina, Sedigheh; Tavasoli, Ali Reza

    2014-01-01

    Currently, the use of blood irradiation for inactivating pathogenic microbes in infected blood products and preventing graft‐versus‐host disease (GVHD) in immune suppressed patients is greater than ever before. In these systems, dose distribution and uniformity are two important concepts that should be checked. In this study, dosimetry of the gamma chamber blood irradiator model Gammacell 3000 Elan was performed by several dosimeter methods including thermoluminescence dosimeters (TLD), PAGAT gel dosimetry, and Monte Carlo simulations using MCNP4C code. The gel dosimeter was put inside a glass phantom and the TL dosimeters were placed on its surface, and the phantom was then irradiated for 5 min and 27 sec. The dose values at each point inside the vials were obtained from the magnetic resonance imaging of the phantom. For Monte Carlo simulations, all components of the irradiator were simulated and the dose values in a fine cubical lattice were calculated using tally F6. This study shows that PAGAT gel dosimetry results are in close agreement with the results of TL dosimetry, Monte Carlo simulations, and the results given by the vendor, and the percentage difference between the different methods is less than 4% at different points inside the phantom. According to the results obtained in this study, PAGAT gel dosimetry is a reliable method for dosimetry of the blood irradiator. The major advantage of this kind of dosimetry is that it is capable of 3D dose calculation. PACS number: 87.53.Bn PMID:24423829

  15. Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method

    NASA Technical Reports Server (NTRS)

    Winckelmans, G. S.

    1995-01-01

    This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (= vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N^2) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10^4 particles. Energy spectra (which require O(N^2) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.
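
    In a vortex particle method, the O(N^2) cost comes from evaluating a regularized Biot-Savart law over all particle pairs. A sketch of a single-point evaluation under one common sign and smoothing convention; this is a generic illustration, not the CTR code:

```python
import numpy as np

def biot_savart_velocity(x, positions, strengths, sigma=0.1):
    """Velocity induced at point x by N vortex particles via a regularized
    Biot-Savart law (O(N) per point, hence O(N^2) for all particles):

        u(x) = -(1/4π) Σ_p (x - x_p) × α_p / (|x - x_p|^2 + σ^2)^(3/2)

    with α_p the particle vorticity strength and σ a smoothing radius."""
    r = x - positions                                   # (N, 3)
    d3 = (np.sum(r * r, axis=1) + sigma ** 2) ** 1.5    # (N,)
    return -np.sum(np.cross(r, strengths) / d3[:, None], axis=0) / (4 * np.pi)

rng = np.random.default_rng(0)
pos = rng.standard_normal((1000, 3))       # particle positions
alpha = rng.standard_normal((1000, 3)) * 1e-3  # particle strengths
u = biot_savart_velocity(np.zeros(3), pos, alpha)
print(u.shape)  # → (3,)
```

A fast tree code, as mentioned above, replaces the far-field part of this sum with multipole approximations.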

  16. Commissioning dosimetry and in situ dose mapping of a semi-industrial Cobalt-60 gamma-irradiation facility using Fricke and Ceric-cerous dosimetry system and comparison with Monte Carlo simulation data

    NASA Astrophysics Data System (ADS)

    Mortuza, Md Firoz; Lepore, Luigi; Khedkar, Kalpana; Thangam, Saravanan; Nahar, Arifatun; Jamil, Hossen Mohammad; Bandi, Laxminarayan; Alam, Md Khorshed

    2018-03-01

    Characterization of a 90 kCi (3330 TBq), semi-industrial, cobalt-60 gamma irradiator was performed by commissioning dosimetry and in-situ dose mapping experiments with Ceric-cerous and Fricke dosimetry systems. Commissioning dosimetry was carried out to determine the distribution pattern of absorbed dose in the irradiation cell and products. To determine the maximum and minimum absorbed dose, the overdose ratio and the dwell time of the tote boxes, a homogeneous dummy product (rice husk) with a bulk density of 0.13 g/cm3 was used in the box positions of the irradiation chamber. The regions of minimum absorbed dose in the tote boxes were observed in the lower zones of the middle plane, and the maximum absorbed doses were found in the middle position of the front plane. Moreover, as part of the dose mapping, dose rates at the wall positions and at some selected strategic positions were also measured in order to carry out multiple irradiation programs simultaneously, especially low-dose research irradiation programs. In most cases, Monte Carlo simulation data, obtained with the Monte Carlo N-Particle eXtended code version MCNPX 2.7, were found to be in congruence with the experimental values obtained from Ceric-cerous and Fricke dosimetry; however, at positions in close proximity to the source, the dose-rate variation between chemical dosimetry and MCNP was higher than at distant positions.

  17. MAGIC-f Gel in Nuclear Medicine Dosimetry: study in an external beam of Iodine-131

    NASA Astrophysics Data System (ADS)

    Schwarcke, M.; Marques, T.; Garrido, C.; Nicolucci, P.; Baffa, O.

    2010-11-01

    The applicability of MAGIC-f gel in Nuclear Medicine dosimetry was investigated by exposure to a 131I source. Calibration was performed to deliver known absorbed doses at different positions around the source. The absorbed dose in the gel was compared with Monte Carlo simulations using the PENELOPE code and with thermoluminescent dosimetry (TLD). From MRI analysis of the gel, an R2-dose sensitivity of 0.23 s-1 Gy-1 was obtained. The agreement between dose-distance curves obtained with Monte Carlo simulation and TLD was better than 97%, and between MAGIC-f and TLD better than 98%. The results show the potential of polymer gel for applications in nuclear medicine where three-dimensional dose distributions are demanded.

  18. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  19. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  20. A Monte Carlo calculation model of electronic portal imaging device for transit dosimetry through heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu; Kim, Jong Oh

    2016-05-15

    Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of an electronic portal imaging device (EPID) based on effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code with density scaling of the EPID structures, was modified by additionally considering the effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested with various homogeneous and heterogeneous phantoms and field sizes by comparing model calculations with EPID measurements. To better evaluate the model, the performance of the XVMC code itself was tested separately by comparing calculated dose to water with ion chamber (IC) array measurements in the plane of the EPID. Results: In the EPID plane, dose to water calculated by the code agreed with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by proximity of the maximum points to the penumbra and by MC noise. The EPID model agreed with measured EPID images within 1.3%, with a maximum point difference of 1.9%. The difference dropped from the higher value of the code by employing a calibration, dependent on field size and thickness, for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend of the calibration factors, unlike the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry in radiation therapy. Through this study, an MC model of the EPID has been developed and its performance rigorously investigated for transit dosimetry.

  1. Un accumulateur echangeur de chaleur hybride pour la gestion simultanee des energies solaire et electrique

    NASA Astrophysics Data System (ADS)

    Ait Hammou, Zouhair

    This study concerns the design of a hybrid heat-storage exchanger (AECH) for the simultaneous management of solar and electric energy. A mathematical model based on the energy conservation equations is presented. It was developed to test different storage materials, among them phase-change materials (solid/liquid) and sensible-heat storage materials. A computer code was implemented and then validated against analytical and numerical results from the literature. In parallel, a reduced-scale experimental prototype was built in the laboratory to validate the code. Simulations were carried out to study the effects of the design parameters and of the storage materials on the thermal behaviour of the AECH and on electricity consumption. Simulation results over four winter months show that n-octadecane paraffin and capric acid are two desirable candidates for energy storage intended for space heating. Using these two materials in the AECH reduces electricity consumption by 32% and flattens the peak-demand problem, since 90% of the electricity is consumed during off-peak hours. Moreover, under a preferential tariff, the computed electricity costs show that a consumer adopting this system benefits from a 50% reduction in the electricity bill.
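
    For the phase-change storage materials mentioned above, an enthalpy formulation is a common way of tracking the melt front in an energy-conservation model. A sketch of the liquid-fraction relation, using n-octadecane-like property values for illustration only, not the study's actual model:

```python
import numpy as np

def melt_fraction(H, c, L, T_melt, T_ref=0.0):
    """Liquid fraction from specific enthalpy H for an isothermal phase change:
    fully solid below H_s = c*(T_melt - T_ref), fully liquid above H_s + L."""
    H_s = c * (T_melt - T_ref)
    return np.clip((H - H_s) / L, 0.0, 1.0)

# Illustrative n-octadecane-like properties: c = 2 kJ/(kg K), latent heat
# L = 240 kJ/kg, melting point 28 degC.
H = np.array([40.0, 56.0, 176.0, 296.0, 320.0])  # kJ/kg
print(melt_fraction(H, c=2.0, L=240.0, T_melt=28.0))
# solid, solid, half-melted, liquid, liquid
```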

  2. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. Comparison of calculated beta- and gamma-ray doses after the Fukushima accident with data from single-grain luminescence retrospective dosimetry of quartz inclusions in a brick sample

    PubMed Central

    Endo, Satoru; Fujii, Keisuke; Kajimoto, Tsuyoshi; Tanaka, Kenichi; Stepanenko, Valeriy; Kolyzhenkov, Timofey; Petukhov, Aleksey; Akhmedova, Umukusum; Bogacheva, Viktoriia

    2018-01-01

    Abstract To estimate the beta- and gamma-ray doses in a brick sample taken from Odaka, Minami-Soma City, Fukushima Prefecture, Japan, a Monte Carlo calculation was performed with Particle and Heavy Ion Transport code System (PHITS) code. The calculated results were compared with data obtained by single-grain retrospective luminescence dosimetry of quartz inclusions in the brick sample. The calculated result agreed well with the measured data. The dose increase measured at the brick surface was explained by the beta-ray contribution, and the slight slope in the dose profile deeper in the brick was due to the gamma-ray contribution. The skin dose was estimated from the calculated result as 164 mGy over 3 years at the sampling site. PMID:29385528

  4. Comparison of calculated beta- and gamma-ray doses after the Fukushima accident with data from single-grain luminescence retrospective dosimetry of quartz inclusions in a brick sample.

    PubMed

    Endo, Satoru; Fujii, Keisuke; Kajimoto, Tsuyoshi; Tanaka, Kenichi; Stepanenko, Valeriy; Kolyzhenkov, Timofey; Petukhov, Aleksey; Akhmedova, Umukusum; Bogacheva, Viktoriia

    2018-05-01

    To estimate the beta- and gamma-ray doses in a brick sample taken from Odaka, Minami-Soma City, Fukushima Prefecture, Japan, a Monte Carlo calculation was performed with Particle and Heavy Ion Transport code System (PHITS) code. The calculated results were compared with data obtained by single-grain retrospective luminescence dosimetry of quartz inclusions in the brick sample. The calculated result agreed well with the measured data. The dose increase measured at the brick surface was explained by the beta-ray contribution, and the slight slope in the dose profile deeper in the brick was due to the gamma-ray contribution. The skin dose was estimated from the calculated result as 164 mGy over 3 years at the sampling site.

  5. Brachytherapy dosimetry of 125I and 103Pd sources using an updated cross section library for the MCNP Monte Carlo transport code.

    PubMed

    Bohm, Tim D; DeLuca, Paul M; DeWerd, Larry A

    2003-04-01

    Permanent implantation of low energy (20-40 keV) photon emitting radioactive seeds to treat prostate cancer is an important treatment option for patients. In order to produce accurate implant brachytherapy treatment plans, the dosimetry of a single source must be well characterized. Monte Carlo based transport calculations can be used for source characterization, but must have up to date cross section libraries to produce accurate dosimetry results. This work benchmarks the MCNP code and its photon cross section library for low energy photon brachytherapy applications. In particular, we calculate the emitted photon spectrum, air kerma, depth dose in water, and radial dose function for both 125I and 103Pd based seeds and compare to other published results. Our results show that MCNP's cross section library differs from recent data primarily in the photoelectric cross section for low energies and low atomic number materials. In water, differences as large as 10% in the photoelectric cross section and 6% in the total cross section occur at 125I and 103Pd photon energies. This leads to differences in the dose rate constant of 3% and 5%, and differences as large as 18% and 20% in the radial dose function for the 125I and 103Pd based seeds, respectively. Using a partially updated photon library, calculations of the dose rate constant and radial dose function agree with other published results. Further, the use of the updated photon library allows us to verify air kerma and depth dose in water calculations performed using MCNP's perturbation feature to simulate updated cross sections. We conclude that in order to most effectively use MCNP for low energy photon brachytherapy applications, we must update its cross section library. Following this update, the MCNP code system will be a very effective tool for low energy photon brachytherapy dosimetry applications.
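
    The radial dose function referred to above is defined in the AAPM TG-43 formalism as g(r) = [D(r)/D(r0)]·[G(r0)/G(r)], with G the geometry function and r0 = 1 cm the reference distance. A sketch with a hypothetical dose-rate table and the point-source approximation G(r) = 1/r^2:

```python
def radial_dose_function(dose, geom, r_index, r0_index):
    """TG-43-style radial dose function from tabulated dose-rate values:
    g(r) = [D(r) / D(r0)] * [G(r0) / G(r)], with G the geometry function
    (point-source approximation G(r) = 1/r^2 used here)."""
    return (dose[r_index] / dose[r0_index]) * (geom[r0_index] / geom[r_index])

# Hypothetical dose-rate table for a low-energy seed (arbitrary units) at
# r = 0.5, 1.0 (reference) and 2.0 cm; geometry function 1/r^2.
r = [0.5, 1.0, 2.0]
dose = [4.4, 1.0, 0.20]
geom = [1 / x ** 2 for x in r]
print([round(radial_dose_function(dose, geom, i, 1), 2) for i in range(3)])
# → [1.1, 1.0, 0.8]
```

Dividing out the geometry function isolates the attenuation-and-scatter behavior that the cross sections govern, which is why g(r) is so sensitive to the library update discussed above.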

  6. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F; Di Dia, A; Pedroli, G

    The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (etran, geant4, mcnpx) has been made.
    Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, and can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X_90, fluka and penelope differ by less than 1% in water and less than 2% in bone for all of the isotopes considered here. The complete fluka DPK data are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
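
    The figures of merit used above (maximum and average percentage difference inside a fraction of R_CSDA or X_90) can be sketched as follows; the helper and kernel values are illustrative, not the paper's data:

```python
# Assumed helper (not from the paper): maximum and mean percent difference
# between two dose point kernels, restricted to radii below a stated fraction
# of the CSDA range, mirroring the 0.8/0.9*R_CSDA figures of merit. The kernel
# values below are illustrative, not the published data.

def pct_diff_within(r, kernel, reference, r_csda, fraction=0.9):
    diffs = [abs(a - b) / b * 100.0
             for ri, a, b in zip(r, kernel, reference)
             if ri <= fraction * r_csda and b != 0.0]
    return max(diffs), sum(diffs) / len(diffs)

r = [0.1, 0.2, 0.3, 0.4, 0.5]              # radii, arbitrary units
kernel_a = [1.02, 0.99, 0.95, 0.70, 0.10]  # e.g. one code's scaled DPK
kernel_b = [1.00, 1.00, 0.97, 0.72, 0.12]  # reference code's scaled DPK
mx, avg = pct_diff_within(r, kernel_a, kernel_b, r_csda=0.5)
print(round(mx, 2), round(avg, 2))
```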

  7. Development and validation of a GEANT4 radiation transport code for CT dosimetry

    PubMed Central

    Carver, DE; Kost, SD; Fernald, MJ; Lewis, KG; Fraser, ND; Pickens, DR; Price, RR; Stabin, MG

    2014-01-01

    We have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate our simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air, a standard 16-cm acrylic head phantom, and a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of our Monte Carlo simulations. We found that simulated and measured CTDI values were within an overall average of 6% of each other. PMID:25706135
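
    The CTDI comparison rests on the standard weighted-CTDI arithmetic, which is not specific to this paper; a hedged sketch with illustrative chamber readings:

```python
# The CTDI comparison rests on the standard weighted-CTDI combination of the
# centre and periphery pencil-chamber readings (a textbook formula, not this
# paper's method); all numbers below are illustrative.

def ctdi_w(center, periphery):
    """CTDI_w = (1/3)*CTDI_100,centre + (2/3)*CTDI_100,periphery (mGy)."""
    return center / 3.0 + 2.0 * periphery / 3.0

def percent_difference(simulated, measured):
    return (simulated - measured) / measured * 100.0

measured = ctdi_w(center=10.0, periphery=12.0)    # mGy, illustrative
simulated = ctdi_w(center=10.4, periphery=12.8)   # mGy, illustrative
print(round(measured, 2), round(percent_difference(simulated, measured), 1))
```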

  8. Development and validation of a GEANT4 radiation transport code for CT dosimetry.

    PubMed

    Carver, D E; Kost, S D; Fernald, M J; Lewis, K G; Fraser, N D; Pickens, D R; Price, R R; Stabin, M G

    2015-04-01

    The authors have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate their simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air with a standard 16-cm acrylic head phantom and with a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of the Monte Carlo simulations. It was found that simulated and measured CTDI values were within an overall average of 6% of each other.

  9. Developpement d'une methode de Monte Carlo dependante du temps et application au reacteur de type CANDU-6

    NASA Astrophysics Data System (ADS)

    Mahjoub, Mehdi

    Solving the Boltzmann equation remains a key step in predicting the behaviour of a nuclear reactor. Unfortunately, solving this equation is still a challenge for complex geometries (a reactor) as well as for simple ones (a cell). To predict the behaviour of a nuclear reactor, a two-step calculation scheme is therefore used. The first step consists of obtaining the nuclear parameters of a reactor cell after a homogenization and condensation stage. The second step is a diffusion calculation over the whole reactor using the results of the first step, with the reactor geometry simplified to a set of homogeneous cells surrounded by reflector. During transients (accidents), these two steps are insufficient to predict the behaviour of the reactor. Since solving the time-dependent form of the Boltzmann equation remains a major challenge for all types of geometries, another calculation scheme is needed. To circumvent this difficulty, the adiabatic hypothesis is used. It takes the form of a four-step calculation scheme. The first and second steps remain the same, for nominal reactor conditions. The third step amounts to obtaining the new nuclear properties of the cell following the perturbation, which are then used in the fourth step in a new reactor calculation to obtain the effect of the perturbation on the reactor. This project aims to verify this hypothesis. To that end, a new calculation scheme was defined. The first stage of this project was to create a new code capable of solving the time-dependent Boltzmann equation with the stochastic Monte Carlo method, in order to obtain cross sections that evolve in time.
    This code was used to simulate a LOCA accident in a CANDU-6 nuclear reactor. The time-dependent cross sections were then used in a space-time diffusion calculation for a CANDU-6 reactor undergoing a LOCA affecting half of the core, in order to observe its behaviour during all phases of the perturbation. For the development phase, we chose the OpenMC code, developed at MIT, as the initial development platform. Introducing and treating delayed neutrons during the simulation was a major challenge to overcome. It is worth noting that the Monte Carlo code developed here can be used at large scale to simulate all types of nuclear reactors, provided the computing resources are available.

  10. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

    In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech was provided to researchers at NASA/GRC for incorporation into the next generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers molecular diffusion and reaction kinetics locally and exactly, without requiring closure, and thus provides an attractive means of simulating complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initializing and running LES are also addressed in the collaborative effort. In parallel to this ongoing technology transfer effort, research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations are chosen such that they can be implemented in NCCLES and used to evaluate the capabilities of the new code. Future development and validation will address spray combustion in gas turbine engines and supersonic scalar mixing.

  11. Validation of a personalized dosimetric evaluation tool (Oedipe) for targeted radiotherapy based on the Monte Carlo MCNPX code

    NASA Astrophysics Data System (ADS)

    Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.

    2006-02-01

    Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.

  12. Detour factors in water and plastic phantoms and their use for range and depth scaling in electron-beam dosimetry.

    PubMed

    Fernández-Varea, J M; Andreo, P; Tabata, T

    1996-07-01

    Average penetration depths and detour factors of 1-50 MeV electrons in water and plastic materials have been computed by means of analytical calculation, within the continuous-slowing-down approximation and including multiple scattering, and using the Monte Carlo codes ITS and PENELOPE. Results are compared to detour factors from alternative definitions previously proposed in the literature. Different procedures used in low-energy electron-beam dosimetry to convert ranges and depths measured in plastic phantoms into water-equivalent ranges and depths are analysed. A new, simple and accurate scaling method, based on Monte Carlo-derived ratios of average electron penetration depths and thus incorporating the effect of multiple scattering, is presented. Data are given for most plastics used in electron-beam dosimetry, together with a fit which extends the method to any other low-Z plastic material. A study of scaled depth-dose curves and mean energies as a function of depth for some plastics in common use shows that the method improves the consistency and results of other scaling procedures in dosimetry with electron beams at therapeutic energies.
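
    The proposed scaling reduces to multiplying a measured depth by a ratio of average penetration depths. A minimal sketch, with a placeholder value for the ratio rather than the paper's fitted data:

```python
# Minimal sketch of the depth-scaling idea: convert a depth measured in a
# plastic phantom to a water-equivalent depth with the ratio of average
# electron penetration depths. The numerical ratio here is a placeholder,
# not the fitted data published in the paper.

def water_equivalent_depth(z_plastic, rbar_water, rbar_plastic):
    """z_water = z_plastic * (rbar_water / rbar_plastic), depths in g/cm^2."""
    return z_plastic * (rbar_water / rbar_plastic)

depths_plastic = [0.5, 1.0, 2.0]   # measured depths in plastic, g/cm^2
depths_water = [water_equivalent_depth(z, rbar_water=4.10, rbar_plastic=4.00)
                for z in depths_plastic]
print([round(z, 4) for z in depths_water])
```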

  13. Modèle tridimensionnel pour coupler les équations magnétiques et électriques dans le cas de la magnétostatique

    NASA Astrophysics Data System (ADS)

    Piriou, F.; Razek, A.

    1991-03-01

    In this paper a 3D model for the coupling of magnetic and electric equations is presented. The magnetic equations are solved with the finite element method using the magnetic vector potential formulation. To take the effects of magnetic saturation into account we use the Newton-Raphson algorithm. We develop the analysis permitting the coupling of the magnetic and electric equations to obtain a system of differential equations which can be solved by numerical integration. As an example we model an iron-core coil, and the validity of our model is verified by comparing the results obtained with an analytical solution and with a 2D code calculation.

  14. CERN IRRADIATION FACILITIES.

    PubMed

    Pozzi, Fabio; Garcia Alia, Ruben; Brugger, Markus; Carbonez, Pierre; Danzeca, Salvatore; Gkotse, Blerina; Richard Jaekel, Martin; Ravotti, Federico; Silari, Marco; Tali, Maris

    2017-09-28

    CERN provides unique irradiation facilities for applications in dosimetry, metrology, intercomparison of radiation protection devices, benchmark of Monte Carlo codes and radiation damage studies to electronics.

  15. Comparison of two LES codes for wind turbine wake studies

    NASA Astrophysics Data System (ADS)

    Sarlak, H.; Pierella, F.; Mikkelsen, R.; Sørensen, J. N.

    2014-06-01

    For the third time, a blind test comparison was conducted in Norway in 2013, comparing numerical simulations of the rotor Cp and Ct and wake profiles with experimental results. As the only large eddy simulation study among the participants, the results of the Technical University of Denmark (DTU), using their in-house CFD solver EllipSys3D, proved to be the most reliable among the models for capturing the wake profiles and the turbulence intensities downstream of the turbine. It was therefore suggested at the workshop to investigate other LES codes and compare their performance with EllipSys3D. The aim of this paper is to investigate two CFD solvers, DTU's in-house code EllipSys3D and the open-source toolbox OpenFoam, for a set of actuator line based LES computations. Two types of simulations are performed: the wake behind a single rotor and the wake behind a cluster of three inline rotors. Results are compared in terms of velocity deficit, turbulence kinetic energy and eddy viscosity. It is seen that both codes predict similar near-wake flow structures, with the exception of OpenFoam's simulations without the subgrid-scale model. The differences begin to increase with increasing distance from the upstream rotor. From the single rotor simulations, EllipSys3D is found to predict a slower wake recovery in the case of uniform laminar inflow. From the 3-rotor computations, it is seen that the difference between the codes is smaller, as the disturbance created by the downstream rotors causes break-down of the wake structures and more homogeneous flow. It is finally observed that the OpenFoam computations are more sensitive to the SGS models.
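
    The quantities compared above are simple post-processing of the resolved fields. A sketch of their definitions applied to a made-up wake profile (none of the numbers are the paper's results):

```python
# Definitions of the two wake quantities compared above, applied to a made-up
# profile (the formulas are standard; none of the numbers are the paper's).

def velocity_deficit(u, u_inf):
    """Normalized deficit (U_inf - U)/U_inf across the wake."""
    return [(u_inf - ui) / u_inf for ui in u]

def turbulence_kinetic_energy(uu, vv, ww):
    """k = 0.5*(u'u' + v'v' + w'w') from resolved velocity variances."""
    return [0.5 * (a + b + c) for a, b, c in zip(uu, vv, ww)]

u_inf = 10.0                          # freestream velocity, m/s
u = [6.0, 7.5, 9.5]                   # streamwise velocity across the wake
vd = velocity_deficit(u, u_inf)
k = turbulence_kinetic_energy([0.8, 0.5, 0.1], [0.6, 0.4, 0.1], [0.6, 0.3, 0.1])
print(vd)
print(k)
```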

  16. Monte Carlo-Based Dosimetry of Beta-Emitters for Intravascular Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, C.K.

    2002-06-25

    Monte Carlo simulations for radiation dosimetry, and experimental verifications of the simulations, have been developed for the treatment geometry of intravascular brachytherapy, a form of radionuclide therapy for occluded coronary disease (restenosis). The Monte Carlo code MCNP4C has been used to calculate the radiation dose from an encapsulated array of beta-emitting seeds (Sr/Y source train). Solid water phantoms have been fabricated to measure the dose on radiochromic films that were exposed to the beta source train for both linear and curved coronary vessel geometries. While the dose difference for the 5-degree curved vessel at the prescription point of f+2.0 mm is within the 10% guideline set by the AAPM, the difference increased dramatically to 16.85% for the 10-degree case, which requires additional adjustment for acceptable dosimetry planning. The experimental dose measurements agree well with the simulation results.

  17. Current status of kilovoltage (kV) radiotherapy in the UK: installed equipment, clinical workload, physics quality control and radiation dosimetry.

    PubMed

    Palmer, Antony L; Pearson, Michael; Whittard, Paul; McHugh, Katie E; Eaton, David J

    2016-12-01

    To assess the status and practice of kilovoltage (kV) radiotherapy in the UK. 96% of the radiotherapy centres in the UK responded to a comprehensive survey. An analysis of the installed equipment base, patient numbers, clinical treatment sites, quality control (QC) testing and radiation dosimetry processes was undertaken. 73% of UK centres have at least one kV treatment unit, with 58 units installed across the UK. Although 35% of units are over 10 years old, 39% of units have been installed in the last 5 years. Approximately 6000 patients are treated with kV units in the UK each year, the most common site (44%) being basal cell carcinoma. A benchmark of QC practice in the UK is presented, against which individual centres can compare their procedures, frequency of testing and acceptable tolerance values. We propose the use of internal "notification" and "suspension" levels for analysis. All surveyed centres were using recommended Codes of Practice for kV dosimetry in the UK; approximately the same number used in-air and in-water methodologies for medium energy, with two-thirds of all centres citing "clinical relevance" as the reason for their choice of code. 64% of centres had hosted an external dosimetry audit within the last 3 years, with only one centre never having been independently audited. The majority of centres use locally measured applicator factors and published backscatter factors for treatments. Monitor unit calculations are performed using software in only 36% of centres. A comprehensive review of current kV practice in the UK is presented. Advances in knowledge: Data and discussion on contemporary kV radiotherapy in the UK, with a particular focus on physics aspects.

  18. Current status of kilovoltage (kV) radiotherapy in the UK: installed equipment, clinical workload, physics quality control and radiation dosimetry

    PubMed Central

    Pearson, Michael; Whittard, Paul; McHugh, Katie E; Eaton, David J

    2016-01-01

    Objective: To assess the status and practice of kilovoltage (kV) radiotherapy in the UK. Methods: 96% of the radiotherapy centres in the UK responded to a comprehensive survey. An analysis of the installed equipment base, patient numbers, clinical treatment sites, quality control (QC) testing and radiation dosimetry processes was undertaken. Results: 73% of UK centres have at least one kV treatment unit, with 58 units installed across the UK. Although 35% of units are over 10 years old, 39% of units have been installed in the last 5 years. Approximately 6000 patients are treated with kV units in the UK each year, the most common site (44%) being basal cell carcinoma. A benchmark of QC practice in the UK is presented, against which individual centres can compare their procedures, frequency of testing and acceptable tolerance values. We propose the use of internal “notification” and “suspension” levels for analysis. All surveyed centres were using recommended Codes of Practice for kV dosimetry in the UK; approximately the same number used in-air and in-water methodologies for medium energy, with two-thirds of all centres citing “clinical relevance” as the reason for their choice of code. 64% of centres had hosted an external dosimetry audit within the last 3 years, with only one centre never having been independently audited. The majority of centres use locally measured applicator factors and published backscatter factors for treatments. Monitor unit calculations are performed using software in only 36% of centres. Conclusion: A comprehensive review of current kV practice in the UK is presented. Advances in knowledge: Data and discussion on contemporary kV radiotherapy in the UK, with a particular focus on physics aspects. PMID:27730839

  19. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F.; Mairani, A.; Battistoni, G.

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (etran, geant4, mcnpx) has been made. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively.
    Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, and can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X_90, fluka and penelope differ by less than 1% in water and less than 2% in bone for all of the isotopes considered here. The complete fluka DPK data are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.

  20. Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes

    NASA Technical Reports Server (NTRS)

    Huang, P. G.

    2004-01-01

    During the period December 23, 1997 to August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both Dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.

  1. Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes

    NASA Technical Reports Server (NTRS)

    Ashpis, David (Technical Monitor); Huang, P. G.

    2004-01-01

    During the period December 23, 1997 to August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both Dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.

  2. Sci-Thur PM: YIS - 07: Monte Carlo simulations to obtain several parameters required for electron beam dosimetry.

    PubMed

    Muir, B; Rogers, D; McEwen, M

    2012-07-01

    When current dosimetry protocols were written, electron beam data were limited and had uncertainties that were unacceptable for reference dosimetry. Protocols for high-energy reference dosimetry are currently being updated, leading to considerable interest in accurate electron beam data. To this end, Monte Carlo simulations using the EGSnrc user-code egs_chamber are performed to extract relevant data for reference beam dosimetry. Calculations of the absorbed dose to water and the absorbed dose to the gas in realistic ion chamber models are performed as a function of depth in water for cobalt-60 and high-energy electron beams between 4 and 22 MeV. These calculations are used to extract several of the parameters required for electron beam dosimetry: the beam quality specifier, R50; beam quality conversion factors, kQ and kR50; the electron quality conversion factor, k'R50; the photon-electron conversion factor, kecal; and ion chamber perturbation factors, PQ. The method used has the advantage that many important parameters can be extracted as a function of depth instead of being determined only at the reference depth, as has typically been done. Results obtained here are in good agreement with measured and other calculated results. The photon-electron conversion factors obtained for a Farmer-type NE2571 chamber and plane-parallel PTW Roos, IBA NACP-02 and Exradin A11 chambers are 0.903, 0.896, 0.894 and 0.906, respectively. These typically differ by less than 0.7% from the contentious TG-51 values but have much smaller systematic uncertainties. These results are valuable for reference dosimetry of high-energy electron beams.
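
    The beam quality specifier R50 extracted above is, in dose terms, the depth at which the depth-dose curve falls to half its maximum. A sketch on an illustrative curve (protocols also define a conversion from the measured ionization ratio, omitted here):

```python
# Sketch of extracting the beam quality specifier R50: the depth at which the
# depth-dose curve falls to 50% of its maximum, found here by linear
# interpolation on the falling edge of an illustrative curve. (Protocols also
# define a conversion from the measured ionization ratio, omitted here.)

def r50(depths_cm, pdd):
    half = max(pdd) / 2.0
    i_max = pdd.index(max(pdd))
    # walk past the dose maximum, then find the 50% crossing on the falling edge
    for i in range(i_max, len(pdd) - 1):
        if pdd[i] >= half >= pdd[i + 1]:
            t = (pdd[i] - half) / (pdd[i] - pdd[i + 1])
            return depths_cm[i] + t * (depths_cm[i + 1] - depths_cm[i])
    raise ValueError("no 50% crossing found")

depths = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]        # cm
pdd = [85.0, 95.0, 100.0, 80.0, 40.0, 10.0]    # illustrative percentage depth dose
print(r50(depths, pdd))  # 3.75 cm for this curve
```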

  3. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work the performance of two neutron spectrum unfolding codes, one based on iterative procedures and one on artificial neural networks, is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), is based on traditional iterative procedures: it uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), was designed using neural network technology. The artificial intelligence approach of a neural net does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural net, the code is able to unfold the neutron spectrum and to simultaneously calculate 15 dosimetric quantities, requiring as input only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. They differ in that the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate the readings of 7 IAEA survey meters using fluence-to-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network.
    Contrary to iterative procedures, in the neural net approach it is possible to reduce the number of count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called the Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
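
    As an illustration of what an iterative unfolding code does internally, the following is a generic multiplicative (MLEM-style) update on a toy two-sphere problem; it is not the actual SPUNIT algorithm, only the same family of iteration:

```python
# Illustration of what an iterative unfolding code does internally: a generic
# multiplicative (MLEM-style) update on a toy two-sphere, two-bin problem.
# This is NOT the SPUNIT algorithm itself, just the same family of iteration.

def unfold(R, counts, guess, iterations=500):
    """R[i][j]: response of detector i to unit fluence in energy bin j."""
    phi = list(guess)
    for _ in range(iterations):
        est = [sum(R[i][j] * phi[j] for j in range(len(phi)))
               for i in range(len(counts))]
        for j in range(len(phi)):
            num = sum(R[i][j] * counts[i] / est[i] for i in range(len(counts)))
            den = sum(R[i][j] for i in range(len(counts)))
            phi[j] *= num / den
    return phi

R = [[1.0, 0.5],
     [0.2, 1.0]]
true_phi = [2.0, 1.0]                              # known toy spectrum
counts = [sum(R[i][j] * true_phi[j] for j in range(2)) for i in range(2)]
phi = unfold(R, counts, guess=[1.0, 1.0])
print([round(p, 2) for p in phi])                  # recovers the toy spectrum
```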

  4. A quantification method for numerical dissipation in quasi-DNS and under-resolved DNS, and effects of numerical dissipation in quasi-DNS and under-resolved DNS of turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Komen, E. M. J.; Camilo, L. H.; Shams, A.; Geurts, B. J.; Koren, B.

    2017-09-01

    LES for industrial applications with complex geometries is mostly characterised by: a) a finite volume CFD method using a non-staggered arrangement of the flow variables and second order accurate spatial and temporal discretisation schemes, b) an implicit top-hat filter, where the filter length is equal to the local computational cell size, and c) eddy-viscosity type LES models. LES based on these three main characteristics is indicated as industrial LES in this paper. It becomes increasingly clear that the numerical dissipation in CFD codes typically used in industrial applications with complex geometries may inhibit the predictive capabilities of explicit LES. Therefore, there is a need to quantify the numerical dissipation rate in such CFD codes. In this paper, we quantify the numerical dissipation rate in physical space based on an analysis of the transport equation for the mean turbulent kinetic energy. Using this method, we quantify the numerical dissipation rate in a quasi-Direct Numerical Simulation (DNS) and in under-resolved DNS of, as a basic demonstration case, fully-developed turbulent channel flow. By quasi-DNS, we mean a DNS performed using a second order accurate finite volume method typically used in industrial applications. Furthermore, we determine and explain the trends in the performance of industrial LES for fully-developed turbulent channel flow at four different Reynolds numbers and three different LES mesh resolutions. The presented explanation of the mechanisms behind the observed trends is based on an analysis of the turbulent kinetic energy budgets. The presented quantitative analyses demonstrate that the numerical errors in the industrial LES computations of the considered turbulent channel flows result in a net numerical dissipation rate which is larger than the subgrid-scale dissipation rate. No new computational methods are presented in this paper.
Instead, the main new elements in this paper are our detailed quantification method for the numerical dissipation rate, the application of this method to a quasi-DNS and under-resolved DNS of fully-developed turbulent channel flow, and the explanation of the effects of the numerical dissipation on the observed trends in the performance of industrial LES for fully-developed turbulent channel flows.
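
    The budget-residual idea described above can be sketched in a few lines: if the production, resolved dissipation, transport and tendency terms of the mean turbulent-kinetic-energy equation are known from the simulation statistics, the numerical dissipation rate follows as the closing residual. The function name and the synthetic budget values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def numerical_dissipation(production, resolved_dissipation, transport, dkdt):
    """Estimate the numerical dissipation rate as the residual of the mean
    turbulent-kinetic-energy budget,
        dk/dt = P - eps_resolved + T - eps_num,
    so that eps_num = P - eps_resolved + T - dk/dt.
    Inputs are arrays of budget terms, e.g. per wall-normal location."""
    return production - resolved_dissipation + transport - dkdt

# Synthetic illustration for a statistically steady flow (dk/dt = 0):
P   = np.array([1.0, 0.8, 0.5])    # production
eps = np.array([0.7, 0.6, 0.4])    # resolved viscous dissipation
T   = np.array([-0.1, 0.0, 0.05])  # transport
eps_num = numerical_dissipation(P, eps, T, np.zeros(3))
```

    In a statistically steady channel flow the tendency term vanishes, so any imbalance of the remaining resolved terms is attributed to numerical dissipation; comparing this residual with the subgrid-scale dissipation rate then shows which of the two dominates.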

  5. Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahler, Albert Comstock

    We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.

  6. Model Validation for Propulsion - On the TFNS and LES Subgrid Models for a Bluff Body Stabilized Flame

    NASA Technical Reports Server (NTRS)

    Wey, Thomas

    2017-01-01

    This paper summarizes the reacting-flow results of simulating the bluff-body-stabilized flame experiment of the Volvo validation rig using a releasable edition of the National Combustion Code (NCC). The turbulence models selected to investigate the configuration are the subgrid-scale kinetic-energy-coupled large eddy simulation (K-LES) and the time-filtered Navier-Stokes (TFNS) simulation. The turbulence-chemistry interaction model used is linear eddy mixing (LEM).

  7. Analysis of regional radiotherapy dosimetry audit data and recommendations for future audits

    PubMed Central

    Palmer, A; Mzenda, B; Kearton, J; Wills, R

    2011-01-01

    Objectives Regional interdepartmental dosimetry audits within the UK provide basic assurances of the dosimetric accuracy of radiotherapy treatments. Methods This work reviews several years of audit results from the South East Central audit group including megavoltage (MV) and kilovoltage (kV) photons, electrons and iodine-125 seeds. Results Apart from some minor systematic errors that were resolved, the results of all audits have been within protocol tolerances, confirming the long-term stability and agreement of basic radiation dosimetric parameters between centres in the audit region. There is some evidence of improvement in radiation dosimetry with the adoption of newer codes of practice. Conclusion The value of current audit methods and the limitations of peer-to-peer auditing are discussed, particularly the influence of the audit schedule on the results obtained, where no “gold standard” exists. Recommendations are made for future audits, including an essential requirement to maintain the monitoring of fundamental dosimetry, such as MV photon and electron output, but audits must also be developed to include new treatment technologies such as image-guided radiotherapy and address the most common sources of error in radiotherapy. PMID:21159805

  8. Data Packages for the Hanford Immobilized Low Activity Tank Waste Performance Assessment 2001 Version [SEC 1 THRU 5]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANN, F.M.

    Data package supporting the 2001 Immobilized Low-Activity Waste Performance Analysis. Geology, hydrology, geochemistry, facility, waste form, and dosimetry data based on recent investigation are provided. Verification and benchmarking packages for selected software codes are provided.

  9. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of a homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697
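
    A voxel-level comparison of two dose maps, of the kind reported above, can be sketched as follows; the helper name and the threshold parameter are illustrative assumptions, not the papers' exact metric.

```python
import numpy as np

def mean_percent_difference(dose_a, dose_b, threshold=0.0):
    """Voxel-level comparison of two absorbed-dose maps: mean relative
    difference (in %) over voxels where the reference dose exceeds a
    threshold, to avoid dividing by near-zero doses."""
    a = np.asarray(dose_a, dtype=float)
    b = np.asarray(dose_b, dtype=float)
    mask = b > threshold
    return 100.0 * np.mean(np.abs(a[mask] - b[mask]) / b[mask])
```

    Thresholding on the reference map matters in practice: out-of-field voxels receive almost no dose, and including them inflates the relative difference without saying anything about the agreement in the clinically relevant region.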

  10. Comparison of Flattening Filter (FF) and Flattening-Filter-Free (FFF) 6 MV photon beam characteristics for small field dosimetry using EGSnrc Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Sangeetha, S.; Sureka, C. S.

    2017-06-01

    The present study is focused on comparing the characteristics of the Varian Clinac 600 C/D flattened and unflattened 6 MV photon beams for small field dosimetry using EGSnrc Monte Carlo simulation, since small field dosimetry is considered to be the most crucial and challenging task in the field of radiation dosimetry. The 6 MV photon beam of a Varian Clinac 600 C/D medical linear accelerator, operated in Flattening Filter (FF) and Flattening-Filter-Free (FFF) mode, was simulated for small field dosimetry using the EGSnrc Monte Carlo user codes (BEAMnrc and DOSXYZnrc), and the beam characteristics were calculated using an educated trial-and-error method. These include: percentage depth dose, lateral beam profile, dose rate delivery, photon energy spectra, photon beam uniformity, out-of-field dose, surface dose, penumbral dose and output factor for small fields (0.5×0.5 cm2 to 4×4 cm2), compared with magna-field sizes (5×5 cm2 to 40×40 cm2) at various depths. The results showed that the optimized beam energy and full-width-half-maximum value for both small field and magna-field dosimetry were 5.7 MeV and 0.13 cm for both FF and FFF beams. The depth of dose maximum for small field sizes deviates minimally for both FF and FFF beams, as for magna-fields. At depths greater than dmax, FFF beams show a steeper dose fall-off in the exponential region than FF beams, and this deviation increases with field size. The shape of the lateral beam profiles of FF and FFF beams remains similar for small field sizes below 4×4 cm2, whereas it differs for magna-fields. Dose rate delivery for FFF beams shows a marked, approximately two-fold increase for both small field and magna-field sizes. Surface doses for FFF beams were higher than for FF beams at small field sizes, but lower at magna-field sizes. The out-of-field dose reduction increases with field size. The photon energy spectrum also increases with field size for the FFF beam mode. Finally, the output factors for FFF beams were lower than for FF beams at small field sizes, but higher at magna-field sizes. From this study, it is concluded that, for small field dosimetry, FFF beams show minimal deviations in the treatment field region, irrespective of the normal tissue region, compared to FF beams. The more prominent result of the study is that the shape of the beam profile remains similar for FF and FFF beams at smaller field sizes, which leads to more accurate treatment planning for IMRT (Intensity-Modulated Radiation Therapy), IGAT (Image-Guided Adaptive Radiation Therapy), SBRT (Stereotactic Body Radiation Therapy), SRS (Stereotactic Radio Surgery), and Tomotherapy techniques, where a homogeneous dose is not necessary. Overall, the determination of the dosimetric beam characteristics of the Varian linac by Monte Carlo simulation provides accurate dose calculations that can serve as clinical golden data.

  11. FLUKA simulation studies on in-phantom dosimetric parameters of a LINAC-based BNCT

    NASA Astrophysics Data System (ADS)

    Ghal-Eh, N.; Goudarzi, H.; Rahmani, F.

    2017-12-01

    The Monte Carlo simulation code FLUKA, version 2011.2c.5, has been used to estimate the in-phantom dosimetric parameters for use in BNCT studies. The in-phantom parameters of a typical Snyder head phantom, which are required prior to any clinical treatment, have been calculated with both the FLUKA and MCNPX codes, which exhibit a promising agreement. The results confirm that FLUKA can be regarded as a good alternative to MCNPX in BNCT dosimetry simulations.

  12. Dosimetry quality audit of high energy photon beams in greek radiotherapy centers.

    PubMed

    Hourdakis, Constantine J; Boziari, A

    2008-04-01

    Dosimetry quality audits and intercomparisons in radiotherapy centers are a useful tool to enhance confidence in accurate therapy and to identify and resolve discrepancies in dose delivery. This is the first national comprehensive study that has been carried out in Greece. During 2002-2006 the Greek Atomic Energy Commission performed a dosimetry quality audit of high energy external photon beams in all (23) Greek radiotherapy centers, where 31 linacs and 13 Co-60 teletherapy units were assessed in terms of their mechanical performance characteristics and relative and absolute dosimetry. The quality audit in dosimetry of external photon beams took place by means of on-site visits, where certain parameters of the photon beams were measured, calculated and assessed according to a specific protocol and the IAEA TRS-398 dosimetry code of practice. In each radiotherapy unit (linac or Co-60), certain functional parameters were measured and the results were compared to tolerance values and limits. Doses in water under reference and non-reference conditions were measured and compared to the stated values. Also, the treatment planning systems (TPS) were evaluated with respect to irradiation time calculations. The results of the mechanical tests, dosimetry measurements and TPS evaluation have been presented in this work and discussed in detail. This study showed that Co-60 units had worse mechanical performance characteristics than linacs. 28% of all irradiation units (23% of linacs and 42% of Co-60 units) exceeded the acceptance limit in at least one mechanical parameter. Dosimetry accuracy was much worse in Co-60 units than in linacs. 61% of the Co-60 units exhibited deviations outside +/-3% and 31% outside +/-5%. The relevant percentages for the linacs were 24% and 7% respectively. The results were grouped for each hospital and the sources of errors (functional and human) have been investigated and discussed in detail. 
This quality audit proved to be a useful tool for the improvement of quality in radiotherapy. It succeeded in disseminating the IAEA TRS-398 protocol to nearly all radiotherapy centers, achieving homogenization and consistency of dosimetry within the country. Also, it detected discrepancies in dosimetry and provided guidance and recommendations to eliminate sources of errors. Finally, it proved that quality assurance programs, periodic quality control tests, maintenance and service play an important role in achieving accuracy and safe operation in radiotherapy.

  13. Numerical modeling of the transitional boundary layer over a flat plate

    NASA Astrophysics Data System (ADS)

    Ivanov, Dimitry; Chorny, Andrei

    2015-11-01

    Our example is connected with fundamental research on understanding how an initially laminar boundary layer becomes turbulent. We have chosen the flow over a flat plate as a prototype for boundary-layer flows around bodies. Special attention was paid to the near-wall region in order to capture all levels of the boundary layer. In this study, the numerical software package OpenFOAM was used to solve the flow field, and the results were compared with data obtained from Large Eddy Simulation (LES). A composite SGS-wall model is incorporated into a computer code suitable for the LES of developing flat-plate boundary layers, and is here extended to the LES of the zero-pressure-gradient, flat-plate turbulent boundary layer. The time discretization is based on a second-order Crank-Nicolson/Adams-Bashforth method, and the LES solver uses the Smagorinsky and one-equation LES turbulence models. The transition models significantly improve the prediction of the onset location compared to the fully turbulent models. LES appears to be the most promising new tool for the design and analysis of flow devices including transition regions of turbulent flow.
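
    As a reminder of what the Smagorinsky model mentioned above computes at each grid point, a minimal single-point sketch follows; the constant Cs = 0.17 is a common textbook choice assumed here, not a value from the study.

```python
import numpy as np

def smagorinsky_nu_t(strain_rate_tensor, delta, cs=0.17):
    """Smagorinsky subgrid eddy viscosity at one grid point:
        nu_t = (Cs * Delta)^2 * |S|,  with  |S| = sqrt(2 * S_ij S_ij),
    where S_ij is the 3x3 resolved strain-rate tensor and Delta the
    filter width (here taken as the local cell size)."""
    S = np.asarray(strain_rate_tensor, dtype=float)
    magnitude = np.sqrt(2.0 * np.sum(S * S))
    return (cs * delta) ** 2 * magnitude

# Plane-strain example: S_ij = diag(1, -1, 0) 1/s, Delta = 0.1 m
nu_t = smagorinsky_nu_t(np.diag([1.0, -1.0, 0.0]), delta=0.1)
```

    The one-equation model mentioned in the abstract replaces this algebraic closure with a transported subgrid kinetic energy, but the eddy-viscosity form of the closure is the same.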

  14. Film cooling from inclined cylindrical holes using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Peet, Yulia V.

    2006-12-01

    The goal of the present study is to investigate numerically the physics of the flow which occurs during film cooling from inclined cylindrical holes. Film cooling is a technique used in the gas turbine industry to reduce heat fluxes to the turbine blade surface. Large Eddy Simulation (LES) is performed for a realistic film cooling configuration, consisting of a large stagnation-type reservoir feeding an array of discrete cooling holes (film holes) flowing into a flat plate turbulent boundary layer. A special computational methodology is developed for this problem, involving coupled simulations using multiple computational codes. A fully compressible LES code is used in the area above the flat plate, while a low Mach number LES code is employed in the plenum and film holes. The motivation for using different codes comes from the essential difference in the nature of the flow in these different regions. The flowfield is analyzed inside the plenum, the film hole and the crossflow region. Flow inside the plenum is stagnating, except for the region close to the exit, where it accelerates rapidly to turn into the hole. The sharp radius of turning at the trailing edge of the plenum pipe connection causes the flow to separate from the downstream wall of the film hole. After coolant injection occurs, a complex flowfield is formed consisting of coherent vortical structures responsible for bringing hot crossflow fluid in contact with the walls of either the film hole or the blade, thus reducing cooling protection. Mean velocity and turbulent statistics are compared to experimental measurements, yielding good agreement for the mean flowfield and satisfactory agreement for the turbulence quantities. LES results are used to assess the applicability of basic assumptions of conventional eddy viscosity turbulence models used with the Reynolds-averaged (RANS) approach, namely the isotropy of the eddy viscosity and thermal diffusivity. 
It is shown here that these assumptions do not hold for the film cooling flows. Comparison of film cooling effectiveness with experiments shows fair agreement for the centerline and laterally-averaged effectiveness. Lateral growth of the jet as judged from the lateral distribution of effectiveness is predicted correctly.

  15. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  16. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.

    1999-01-01

    The objective of this study was to validate the use of a 3D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry of I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
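
    The 3D-DFT convolution step can be sketched with NumPy FFTs: the absorbed-dose map is the cumulated-activity map convolved with a voxel dose kernel. This is a generic periodic-convolution sketch under a homogeneous-medium assumption, not the authors' implementation; the function name is illustrative.

```python
import numpy as np

def dose_by_3d_dft(cumulated_activity, voxel_kernel):
    """3D-DFT (FFT) convolution dosimetry: dose = activity (*) kernel,
    assuming a homogeneous medium and a kernel centered in its array.
    The circular (periodic) convolution implied by the DFT is adequate
    when the activity is well inside the grid."""
    K = np.fft.fftn(np.fft.ifftshift(voxel_kernel))  # move kernel center to index 0
    A = np.fft.fftn(cumulated_activity)
    return np.real(np.fft.ifftn(A * K))

# A unit point source at the grid center reproduces the kernel itself:
kernel = np.arange(27.0).reshape(3, 3, 3)
activity = np.zeros((3, 3, 3))
activity[1, 1, 1] = 1.0
dose = dose_by_3d_dft(activity, kernel)
```

    The appeal of the method, compared with full Monte Carlo transport, is speed: one forward FFT per map plus one inverse FFT, at the cost of the homogeneity assumption tested in the study.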

  17. VVER-440 and VVER-1000 reactor dosimetry benchmark - BUGLE-96 versus ALPAN VII.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duo, J. I.

    2011-07-01

    Document available in abstract form only, full text of document follows: Analytical results of the vodo-vodyanoi energetichesky reactor (VVER)-440 and VVER-1000 reactor dosimetry benchmarks developed from engineering mockups at the Nuclear Research Institute Rez LR-0 reactor are discussed. These benchmarks provide accurate determination of radiation field parameters in the vicinity and over the thickness of the reactor pressure vessel. Measurements are compared to results calculated with two sets of tools: the TORT discrete ordinates code with the BUGLE-96 cross-section library versus the newly Westinghouse-developed RAPTOR-M3G with ALPAN VII.0. The parallel code RAPTOR-M3G enables detailed neutron distributions in energy and space in reduced computational time. The ALPAN VII.0 cross-section library is based on ENDF/B-VII.0 and is designed for reactor dosimetry applications. It uses a unique broad-group structure to enhance resolution in the thermal-neutron-energy range compared to other analogous libraries. The comparison of fast neutron (E > 0.5 MeV) results shows good agreement (within 10%) between the BUGLE-96 and ALPAN VII.0 libraries. Furthermore, the results compare well with analogous results of participants of the REDOS program (2005). Finally, the analytical results for fast neutrons agree within 15% with the measurements for most locations in all three mockups. In general, however, the analytical results underestimate the attenuation through the reactor pressure vessel thickness compared to the measurements. (authors)

  18. The Bebig Valencia-type skin applicators: Dosimetric study and implementation of a dosimetric hybrid technique.

    PubMed

    Anagnostopoulos, Georgios; Andrássy, Michael; Baltas, Dimos

    To determine the relative dose rate distribution in water for the Bebig 20 mm and 30 mm skin applicators and report results in a form suitable for potential clinical use. Results for both skin applicators are also provided in the form of a hybrid Task Group 43 (TG-43) dosimetry technique. Furthermore, the radiation leakage around both skin applicators from the radiation protection point of view and the impact of the geometrical source position uncertainties are studied and reported. Monte Carlo simulations were performed using the MCNP 6.1 general purpose code, which was benchmarked against published dosimetry data for the Bebig Ir2.A85-2 high-dose-rate iridium-192 source, as well as the dosimetry data for the two Elekta skin applicators. Both Bebig skin applicators were modeled, and the dose rate distributions in a water phantom were calculated. The dosimetric quantities derived according to a hybrid TG-43 dosimetry technique are provided with their corresponding uncertainty values. The air kerma rate in air was simulated in the vicinity of each skin applicator to assess the radiation leakage. Results from the Monte Carlo simulations of both skin applicators are presented in the form of figures and relative dose rate tables, and additionally with the aid of the quantities defined in the hybrid TG-43 dosimetry technique and their corresponding uncertainty values. Their output factors, flatness, and penumbra values were found comparable to the Elekta skin applicators. The radiation shielding was evaluated to be adequate. The effect of potential uncertainties in source positioning on dosimetry should be investigated as part of applicator commissioning. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  19. [Determination of absorbed dose to water for high energy photon and electron beams--comparison of different dosimetry protocols].

    PubMed

    Zakaria, Golam Abu; Schütte, Wilhelm

    2003-01-01

    The determination of absorbed dose to water for high-energy photon and electron beams is performed in Germany according to the dosimetry protocol DIN 6800-2 (1997). At an international level, the main protocols used are the AAPM dosimetry protocol TG-51 (1999) and the IAEA Code of Practice TRS-398 (2000). The present paper systematically compares these three dosimetry protocols and identifies similarities and differences. The investigations were performed using 4 and 10 MV photon beams, as well as 6, 8, 9, 10, 12 and 14 MeV electron beams. Two cylindrical and two plane-parallel type chambers were used for the measurements. In general, the discrepancies among the three protocols were 1.0% for photon beams and 1.6% for electron beams. Comparative measurements in the context of measurement technical control (MTK) with TLD showed a deviation of less than 1.3% between the measurements obtained according to protocol DIN 6800-2 and MTK (exceptions: 4 MV photons with 2.9% and 6 MeV electrons with 2.4%). While only cylindrical chambers were used for photon beams, measurements of electron beams were performed using both cylindrical and plane-parallel chambers (the latter after a cross-calibration against a cylindrical chamber, as required by the respective dosimetry protocols). Notably, contrary to the recommendations in the corresponding protocols, we found that cylindrical chambers can also be used for energies from 6 to 10 MeV.

  20. Comparison of Three Methods of Calculation, Experimental and Monte Carlo Simulation in Investigation of Organ Doses (Thyroid, Sternum, Cervical Vertebra) in Radioiodine Therapy

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Ayat, Saba

    2012-01-01

    Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues; hence, dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study is to measure the absorbed doses of important organs by Monte Carlo N-Particle (MCNP) simulation and to compare the results of different methods of dosimetry by performing a paired t-test. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated by using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared by SPSS software. The absorbed dose obtained by Monte Carlo simulations for 100, 150, and 175 mCi administered 131I was found to be 388.0, 427.9, and 444.8 cGy for the thyroid; 208.7, 230.1, and 239.3 cGy for the sternum; and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The results of the paired t-test were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods of Monte Carlo simulation, MIRD calculation and direct experimental dosimetry using TLD. PMID:23717806
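
    The paired comparison used above can be reproduced in a few lines; `scipy.stats.ttest_rel` does the same, but a NumPy-only sketch of the t statistic is shown here (the function name is illustrative).

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic for matched readings (e.g. TLD vs MCNP doses
    for the same organs): t = mean(d) / (s_d / sqrt(n)), with d = x - y.
    Returns (t, degrees of freedom); look up the p-value in a t table
    or with scipy.stats.t.sf."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

    The test is paired rather than independent because each organ contributes one reading per method, so the per-organ differences, not the pooled samples, carry the information.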

  1. Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.

    PubMed

    Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle

    2014-11-01

    To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissue. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work was focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source localized within the cortical bone. S-value (absorbed dose per unit cumulated activity) calculations were performed using PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). The cytoplasm, nucleus, cell surface, mouse femur bone and Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2-10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, cytoplasm or cell surface. The S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values, with relative deviations of less than 4.5%. The dose distribution to mouse bone marrow cells showed that the cells localized near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
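
    The S-value formalism underlying these calculations reduces, in the fully-local-absorption limit, to a one-line relation between emitted energy, target mass and dose; the numbers below are illustrative, not the paper's values.

```python
def s_value(mean_energy_per_decay_J, target_mass_kg):
    """MIRD-style S value (Gy per Bq*s): mean absorbed dose to the target
    per nuclear transformation in the source, simplified here by assuming
    all emitted energy is absorbed locally in the target."""
    return mean_energy_per_decay_J / target_mass_kg

def absorbed_dose_Gy(cumulated_activity_Bq_s, S):
    """Absorbed dose D = A_tilde * S (MIRD schema)."""
    return cumulated_activity_Bq_s * S

# Illustrative numbers: 3.14e-14 J deposited per decay in a 1e-12 kg target
D = absorbed_dose_Gy(1.0e6, s_value(3.14e-14, 1.0e-12))
```

    The Monte Carlo codes in the study do the hard part this sketch skips: computing what fraction of each beta particle's energy actually reaches the target through bone and marrow, which is what the tabulated S-values encode.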

  2. SU-F-T-111: Investigation of the Attila Deterministic Solver as a Supplement to Monte Carlo for Calculating Out-Of-Field Radiotherapy Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Lee, C; Failla, G

    Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is a user-friendly software which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry. The goal is for the dosimetry to be sufficiently accurate for use in retrospective epidemiological investigations.

  3. LWR pressure vessel surveillance dosimetry improvement program: LWR power reactor surveillance physics-dosimetry data base compendium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McElroy, W.N.

    1985-08-01

    This NRC physics-dosimetry compendium is a collation of information and data developed from available research and commercial light water reactor vessel surveillance program (RVSP) documents and related surveillance capsule reports. The data represent the results of the HEDL least-squares FERRET-SAND II code re-evaluation of exposure units and values for 47 PWR and BWR surveillance capsules for W, B&W, CE, and GE power plants. Using a consistent set of auxiliary data and dosimetry-adjusted reactor physics results, the revised fluence values for E > 1 MeV averaged 25% higher than the originally reported values. The range of fluence values (new/old) was from a low of 0.80 to a high of 2.38. These HEDL-derived FERRET-SAND II exposure parameter values are being used for NRC-supported HEDL and other PWR and BWR trend curve data development and testing studies. These studies are providing results to support Revision 2 of Regulatory Guide 1.99. As stated by Randall (Ra84), the Guide is being updated to reflect recent studies of the physical basis for neutron radiation damage and efforts to correlate damage to chemical composition and fluence.

  4. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, such as dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low- and high-energy brachytherapy sources. Then the cross-section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high-energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low-energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low-energy brachytherapy sources can cause large errors in the results. It is therefore recommended not to use this code for low-energy sources unless its cross-section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross-section libraries.
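
    For reference, the radial dose function g(r) compared above is defined by the TG-43 formalism; with a point-source geometry factor G(r) ∝ 1/r², it reduces to the sketch below (a simplification of the full line-source formalism, shown only to make the quantity concrete).

```python
def radial_dose_function(dose_rate_r, dose_rate_r0, r, r0=1.0):
    """TG-43 radial dose function in the point-source approximation,
    G(r) = 1/r^2:
        g(r) = [D(r) / D(r0)] * [G(r0) / G(r)] = [D(r) / D(r0)] * (r / r0)^2,
    i.e. the dose fall-off with the inverse-square divergence factored
    out, leaving only attenuation and scatter in the medium."""
    return (dose_rate_r / dose_rate_r0) * (r / r0) ** 2

# A pure inverse-square field gives g(r) = 1 at every distance:
g2 = radial_dose_function(dose_rate_r=0.25, dose_rate_r0=1.0, r=2.0)
```

    Because the geometric divergence is divided out, g(r) is sensitive to the photon cross-sections used by each code, which is why the low-energy sources, where photoelectric cross-sections differ most between libraries, show the large inter-code discrepancies reported above.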

  5. Development, validation, and implementation of a patient-specific Monte Carlo 3D internal dosimetry platform

    NASA Astrophysics Data System (ADS)

    Besemer, Abigail E.

    Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site as well as the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA approved software for internal dosimetry calculates doses based on the MIRD methodology, which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods, these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. 
Retrospective dosimetry was also calculated for various pre-clinical and clinical patients, and large dosimetric differences were found between conventional organ-level methods and the patient-specific voxelized methods described in this work. The dosimetric impact of various steps in the 3D voxelized dosimetry process was evaluated, including quantitative imaging acquisition, image coregistration, voxel resampling, ROI contouring, CT-based material segmentation, and pharmacokinetic fitting. Finally, a multi-objective treatment planning optimization framework was developed for multi-radiopharmaceutical combination therapies.
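The organ-level MIRD scheme that this platform improves upon reduces to multiplying time-integrated activities in source organs by tabulated S values. A minimal sketch of that baseline calculation, with placeholder numbers rather than real S factors:

```python
def mird_dose(time_integrated_activity, s_values, target):
    """Mean absorbed dose to `target` (Gy), given time-integrated activities
    (MBq*s) per source organ and S values (Gy per MBq*s)."""
    return sum(time_integrated_activity[src] * s_values[(target, src)]
               for src in time_integrated_activity)

# Illustrative placeholders only -- not tabulated phantom S factors.
A_tilde = {"liver": 5.0e5, "kidneys": 1.2e5}                     # MBq*s
S = {("liver", "liver"): 2.0e-6, ("liver", "kidneys"): 1.0e-7}   # Gy/(MBq*s)

print(round(mird_dose(A_tilde, S, "liver"), 3))  # mean liver dose in Gy
```

The voxelized Monte Carlo approach described in the abstract replaces these phantom-averaged S values with patient-specific transport, which is where the large dosimetric differences arise.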

  6. PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin Mohammed; Som, Sibendu

    This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and an overall reduction in engine reliability. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a tenfold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. The model has also been applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
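The ensemble idea behind this approach can be sketched independently of any CFD solver: run many short perturbed "cycles" in parallel and report variability as the coefficient of variation (COV) of peak cylinder pressure. The stand-in cycle model below is a random placeholder, not an engine simulation:

```python
import random
import statistics

def run_perturbed_cycle(seed, base_peak=55.0, sigma=2.5):
    """Stand-in for one short LES cycle launched with perturbed initial
    conditions; returns a peak cylinder pressure in bar (illustrative)."""
    rng = random.Random(seed)
    return base_peak + rng.gauss(0.0, sigma)

# Each seed plays the role of one member of the parallel ensemble.
peaks = [run_perturbed_cycle(s) for s in range(64)]
cov = statistics.stdev(peaks) / statistics.mean(peaks)
print(f"COV of peak pressure: {100 * cov:.1f}%")
```

The speedup over serial multi-cycle LES comes from the ensemble members being independent, so all 64 "cycles" could run concurrently.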

  7. Monte Carlo simulation of portal dosimetry on a rectilinear voxel geometry: a variable gantry angle solution.

    PubMed

    Chin, P W; Spezi, E; Lewis, D G

    2003-08-21

    A software solution has been developed to carry out Monte Carlo simulations of portal dosimetry using the BEAMnrc/DOSXYZnrc code at oblique gantry angles. The solution is based on an integrated phantom, whereby the effect of incident beam obliquity was included using geometric transformations. The geometric transformations are accurate within +/- 1 mm and +/- 1 degree with respect to exact values calculated using trigonometry. An application in portal image prediction for an inhomogeneous phantom demonstrated good agreement with measured data, where the root-mean-square difference was under 2% within the field. Thus, we achieved a dose model framework capable of handling arbitrary gantry angles, voxel-by-voxel phantom description and realistic particle transport throughout the geometry.
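The core of the geometric-transformation idea is a rotation of phantom coordinates about the isocenter in place of physically tilting the beam. A simplified 2-D sketch (not the BEAMnrc/DOSXYZnrc implementation):

```python
import math

def rotate_about_isocenter(x, z, gantry_deg):
    """Rotate a point (x, z), in mm relative to the isocenter, by the
    gantry angle; a 2-D stand-in for the full 3-D transformation."""
    t = math.radians(gantry_deg)
    return (x * math.cos(t) - z * math.sin(t),
            x * math.sin(t) + z * math.cos(t))

# A point 100 mm upstream of the isocenter, viewed at a 90-degree gantry angle.
x, z = rotate_about_isocenter(0.0, -100.0, 90.0)
print(x, z)
```

Because the rotation is exact up to floating-point precision, the quoted +/- 1 mm and +/- 1 degree tolerances are dominated by voxelization, not by the transform itself.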

  8. Simulation of heat and mass transfer in turbulent channel flow using the spectral-element method: effect of spatial resolution

    NASA Astrophysics Data System (ADS)

    Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.

    2016-10-01

    We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution compared to the direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS grid, predicted the friction velocity to within 2.0%.
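The 2.0% accuracy statement is just a relative error of the LES friction velocity against the DNS reference; a one-line check with illustrative values (not the paper's data):

```python
def rel_error(les_value, dns_value):
    """Relative error of an LES prediction against a DNS reference."""
    return abs(les_value - dns_value) / dns_value

u_tau_dns = 0.0543   # assumed DNS friction velocity (nondimensional)
u_tau_les = 0.0552   # assumed coarse-LES prediction

print(f"{100 * rel_error(u_tau_les, u_tau_dns):.2f}%")
```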

  9. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

    The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and the modelling and simulation applied to the radiation protection and dosimetry research field. For the first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and validate their simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm2. This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
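The two beam-quality indices compared here are related: IAEA TRS-398 quotes an approximate empirical conversion from PDD20,10 (the ratio of percent depth doses at 20 cm and 10 cm) to TPR20,10, valid for 10×10 cm2 fields. A sketch of that conversion:

```python
def tpr2010_from_pdd2010(pdd2010):
    """Approximate empirical relation quoted in IAEA TRS-398:
    TPR20,10 ~= 1.2661 * PDD20,10 - 0.0595, for 10x10 cm2 fields.
    `pdd2010` is the ratio of percent depth doses at 20 cm and 10 cm."""
    return 1.2661 * pdd2010 - 0.0595

# 0.582 is an illustrative PDD20,10 for a 6 MV beam, not a measured value.
print(round(tpr2010_from_pdd2010(0.582), 3))
```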

  10. Comparison of IPSM 1990 photon dosimetry code of practice with IAEA TRS‐398 and AAPM TG‐51.

    PubMed Central

    Henríquez, Francisco Cutanda

    2009-01-01

    Several codes of practice for photon dosimetry are currently used around the world, supported by different organizations. A comparison of IPSM 1990 with both IAEA TRS‐398 and AAPM TG‐51 has been performed. All three protocols are based on the calibration of ionization chambers in terms of standards of absorbed dose to water, as is the case with other modern codes of practice. This comparison has been carried out for photon beams of nominal energies: 4 MV, 6 MV, 8 MV, 10 MV and 18 MV. An NE 2571 graphite ionization chamber was used in this study, cross‐calibrated against an NE 2611A Secondary Standard calibrated at the National Physical Laboratory (NPL). Absolute dose in reference conditions was obtained using each of these three protocols, including: beam quality indices, beam quality conversion factors both theoretical and NPL experimental ones, correction factors for influence quantities and absolute dose measurements. Each protocol's recommendations were strictly followed. Uncertainties have been obtained according to the ISO Guide to the Expression of Uncertainty in Measurement. Absorbed dose obtained according to all three protocols agrees within experimental uncertainty. The largest difference between absolute dose results for two protocols is obtained for the highest energy: 0.7% between IPSM 1990 and IAEA TRS‐398 using theoretical beam quality conversion factors. PACS number: 87.55.tm
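All three protocols share the same absorbed-dose-to-water formalism: Dw = M · N_D,w · kQ, with the raw chamber reading corrected for influence quantities such as temperature and pressure. A minimal sketch with illustrative numbers (not the study's values):

```python
def k_tp(temp_c, press_kpa, t0=20.0, p0=101.325):
    """Temperature-pressure correction for a vented ionization chamber,
    referenced to t0 (deg C) and p0 (kPa)."""
    return (273.2 + temp_c) / (273.2 + t0) * p0 / press_kpa

def dose_to_water(m_raw, nd_w, kq, temp_c, press_kpa):
    """Dw = corrected reading * N_D,w * kQ. Units: C, Gy/C, -, deg C, kPa."""
    return m_raw * k_tp(temp_c, press_kpa) * nd_w * kq

# Illustrative values: 20 nC reading, N_D,w = 4.5e7 Gy/C, kQ = 0.991.
d = dose_to_water(m_raw=20.0e-9, nd_w=4.5e7, kq=0.991,
                  temp_c=22.0, press_kpa=100.0)
print(f"{d:.4f} Gy")
```

In practice each protocol differs mainly in how kQ and the influence corrections are obtained, which is exactly what the 0.7% spread above reflects.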

  11. Assessment of Hybrid RANS/LES Turbulence Models for Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.

    2010-01-01

    Predicting the noise from aircraft with exposed landing gear remains a challenging problem for the aeroacoustics community. Although computational fluid dynamics (CFD) has shown promise as a technique that could produce high-fidelity flow solutions, generating grids that can resolve the pertinent physics around complex configurations can be very challenging. Structured grids are often impractical for such configurations. Unstructured grids offer a path forward for simulating complex configurations. However, few unstructured grid codes have been thoroughly tested for unsteady flow problems in the manner needed for aeroacoustic prediction. A widely used unstructured grid code, FUN3D, is examined for resolving the near field in unsteady flow problems. Although the ultimate goal is to compute the flow around complex geometries such as the landing gear, simpler problems that include some of the relevant physics, and are easily amenable to the structured grid approaches are used for testing the unstructured grid approach. The test cases chosen for this study correspond to the experimental work on single and tandem cylinders conducted in the Basic Aerodynamic Research Tunnel (BART) and the Quiet Flow Facility (QFF) at NASA Langley Research Center. These configurations offer an excellent opportunity to assess the performance of hybrid RANS/LES turbulence models that transition from RANS in unresolved regions near solid bodies to LES in the outer flow field. Several of these models have been implemented and tested in both structured and unstructured grid codes to evaluate their dependence on the solver and mesh type. Comparison of FUN3D solutions with experimental data and numerical solutions from a structured grid flow solver are found to be encouraging.

  12. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between around 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. To have a fully resolved DNS through the end of our simulations, the grid spacing would need to be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).
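The "roughly Kolmogorov" claim can be checked by fitting the log-log slope of a spectrum over an inertial range and comparing it with -5/3. Here a synthetic k^(-5/3) spectrum stands in for the simulation data:

```python
import math

# Synthetic inertial-range spectrum E(k) ~ k^(-5/3) at dyadic wave numbers.
ks = [2.0 ** i for i in range(2, 10)]
E = [k ** (-5.0 / 3.0) for k in ks]

# Least-squares slope in log-log coordinates.
xs = [math.log(k) for k in ks]
ys = [math.log(e) for e in E]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 3))  # -> -1.667
```

Applied to real LES output, the fit window would stop well below the Nyquist wave number, where the spectra drop off.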

  13. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE PAGES

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    2017-04-20


  14. Two-dimensional dosimetry of radiotherapeutical proton beams using thermoluminescence foils.

    PubMed

    Czopyk, L; Klosowski, M; Olko, P; Swakon, J; Waligorski, M P R; Kajdrowicz, T; Cuttone, G; Cirrone, G A P; Di Rosa, F

    2007-01-01

    In modern radiation therapy such as intensity modulated radiation therapy or proton therapy, one is able to cover the target volume with improved dose conformation and to spare surrounding tissue with the help of modern measurement techniques. Novel thermoluminescence dosimetry (TLD) foils, developed from the hot-pressed mixture of LiF:Mg,Cu,P (MCP TL) powder and ethylene-tetrafluoroethylene (ETFE) copolymer, have been applied for 2-D dosimetry of radiotherapeutical proton beams at INFN Catania and IFJ Krakow. A TLD reader with a 70 mm heating plate and CCD camera was used to read the 2-D emission pattern of irradiated foils. The absorbed dose profiles were evaluated, taking into account correction factors specific for TLD such as dose and energy response. TLD foils were applied to measure dose distributions within an eye phantom, and the results were compared with predictions obtained from the MCNPX code and the Eclipse Ocular Proton Planning (Varian Medical Systems) clinical radiotherapy planning system. We demonstrate the possibility of measuring 2-D dose distributions with a point resolution of about 0.5 x 0.5 mm2.

  15. Comparison of codes assessing galactic cosmic radiation exposure of aircraft crew.

    PubMed

    Bottollier-Depois, J F; Beck, P; Bennett, B; Bennett, L; Bütikofer, R; Clairand, I; Desorgher, L; Dyer, C; Felsberger, E; Flückiger, E; Hands, A; Kindl, P; Latocha, M; Lewis, B; Leuthold, G; Maczka, T; Mares, V; McCall, M J; O'Brien, K; Rollet, S; Rühm, W; Wissmann, F

    2009-10-01

    The assessment of the exposure to cosmic radiation onboard aircraft is one of the concerns of bodies responsible for radiation protection. Cosmic particle flux is significantly higher onboard aircraft than at ground level, and its intensity depends on the solar activity. The dose is usually estimated using codes validated against experimental data. In this paper, a comparison of various codes, some of them used routinely, is presented to assess the dose received by aircraft crew from galactic cosmic radiation. Results are provided for periods close to solar maximum and minimum and for selected flights covering major commercial routes in the world. The overall agreement between the codes, particularly for those routinely used for aircraft crew dosimetry, was better than +/-20 % from the median in all but two cases. The agreement among the codes is considered to be fully satisfactory for radiation protection purposes.
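The +/-20%-from-median agreement criterion used above is straightforward to evaluate; the route dose-rate values below are invented placeholders, not the paper's data:

```python
import statistics

# Hypothetical dose rates (uSv/h) reported by four codes for one flight route.
doses = {"codeA": 4.8, "codeB": 5.2, "codeC": 5.0, "codeD": 5.5}

med = statistics.median(doses.values())
for name, d in doses.items():
    dev = 100.0 * (d - med) / med
    print(f"{name}: {dev:+.1f}% from median")
```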

  16. Identification of Trends into Dose Calculations for Astronauts through Performing Sensitivity Analysis on Calculational Models Used by the Radiation Health Office

    NASA Technical Reports Server (NTRS)

    Adams, Thomas; VanBaalen, Mary

    2009-01-01

    The Radiation Health Office (RHO) determines each astronaut's cancer risk using models that relate cancer risk to the radiation dose astronauts receive from spaceflight missions. The baryon transport code (BRYNTRN), the high charge (Z) and energy transport code (HZETRN), and computer risk models are used to determine the effective dose received by astronauts in Low Earth Orbit (LEO). These codes use an approximation of the Boltzmann transport equation. The purpose of the project is to run these codes for various International Space Station (ISS) flight parameters in order to gain a better understanding of how they respond to different scenarios. The project will determine how variations in one set of parameters, such as the point of the solar cycle and altitude, can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.

  17. Reactive transport modeling of uranium 238 and radium 226 in groundwater of the Königstein uranium mine, Germany

    NASA Astrophysics Data System (ADS)

    Nitzsche, O.; Merkel, B.

    Knowledge of the transport behavior of radionuclides in groundwater is needed for both groundwater protection and remediation of abandoned uranium mines and milling sites. Dispersion, diffusion, mixing, recharge to the aquifer, and chemical interactions, as well as radioactive decay, should be taken into account to obtain reliable predictions on transport of primordial nuclides in groundwater. This paper demonstrates the need for carrying out rehabilitation strategies before closure of the Königstein in-situ leaching uranium mine near Dresden, Germany. Column experiments on drilling cores with uranium-enriched tap water provided data about the exchange behavior of uranium. Uranium breakthrough was observed after more than 20 pore volumes. This strong retardation is due to the exchange of positively charged uranium ions. The code TReAC is a 1-D, 2-D, and 3-D reactive transport code that was modified to take into account the radioactive decay of uranium and the most important daughter nuclides, and to include double-porosity flow. TReAC satisfactorily simulated the breakthrough curves of the column experiments and provided a first approximation of exchange parameters. Groundwater flow in the region of the Königstein mine was simulated using the FLOWPATH code. Reactive transport behavior was simulated with TReAC in one dimension along a 6000-m path line. Results show that uranium migration is relatively slow, but that due to decay of uranium, the concentration of radium along the flow path increases. Results are highly sensitive to the influence of double-porosity flow.
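Breakthrough after more than 20 pore volumes reflects retardation by ion exchange. A standard linear-sorption (Kd) sketch of how such retardation is estimated, with illustrative parameters rather than the Königstein data:

```python
def retardation_factor(bulk_density, porosity, kd):
    """R = 1 + (rho_b / n) * Kd for linear sorption; a conservative
    (non-sorbing) tracer has R = 1, and breakthrough of a retarded
    species occurs after roughly R pore volumes.
    Units: bulk_density g/cm3, porosity dimensionless, Kd mL/g."""
    return 1.0 + (bulk_density / porosity) * kd

# Illustrative aquifer parameters, not site-specific values.
R = retardation_factor(bulk_density=1.7, porosity=0.3, kd=4.0)
print(round(R, 1))  # approximate pore volumes to breakthrough
```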

  18. The calibration of a Scanditronix-Wellhöfer thimble chamber for photon dosimetry using the IAEA TRS 277 code of practice.

    PubMed

    Fourie, O L

    2004-03-01

    This note investigates the calibration of a Scanditronix-Wellhöfer type FC65-G ionisation chamber to be used in clinical photon dosimetry. The current Adaptation by the Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM) of the IAEA TRS 277 dosimetry protocol makes no provision for this type of chamber. The absorbed dose to air calibration coefficient ND was therefore calculated from the air kerma calibration coefficient NK using the formalism of the IAEA TRS 277 protocol, and it is shown that the value of the correction factor km·katt for the FC65-G chamber is identical to that of the NE 2571 chamber. ND was also determined experimentally from a cross-calibration against an NE 2571 chamber. Good agreement was found between the calculated and measured values. To establish to what extent the ACPSEM Adaptation can be used for the FC65-G chamber, values for the ratio of stopping powers in water and air (Sw,air)Q and the perturbation correction factor pQ were calculated using the TRS 277 protocol. From these results it is shown that over the range of beam qualities TPR20,10 = 0.59 to TPR20,10 = 0.78, the Adaptation can be used for the FC65-G chamber.
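The air-kerma-to-absorbed-dose conversion used in the note follows the TRS 277 formalism, ND = NK · (1 - g) · km · katt. A one-line sketch with illustrative values (g is about 0.003 for 60Co; km and katt are of order unity; the NK value is assumed, not measured):

```python
def nd_from_nk(nk, g, km, katt):
    """TRS 277 formalism: absorbed-dose-to-air coefficient from the
    air kerma coefficient. nk in Gy/C; g, km, katt dimensionless."""
    return nk * (1.0 - g) * km * katt

nd = nd_from_nk(nk=4.10e7, g=0.003, km=0.973, katt=0.990)
print(f"{nd:.3e} Gy/C")
```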

  19. Ionization chamber dosimetry of small photon fields: a Monte Carlo study on stopping-power ratios for radiosurgery and IMRT beams.

    PubMed

    Sánchez-Doblado, F; Andreo, P; Capote, R; Leal, A; Perucha, M; Arráns, R; Núñez, L; Mainegra, E; Lagares, J I; Carrasco, E

    2003-07-21

    Absolute dosimetry with ionization chambers of the narrow photon fields used in stereotactic techniques and IMRT beamlets is constrained by lack of electron equilibrium in the radiation field. It is questionable whether the stopping-power ratios in dosimetry protocols, obtained for broad photon beams under quasi-electron-equilibrium conditions, can be used in the dosimetry of narrow fields while keeping the uncertainty at the same level as for the broad beams used in accelerator calibrations. Monte Carlo simulations have been performed for two 6 MV clinical accelerators (Elekta SL-18 and Siemens Mevatron Primus), equipped with radiosurgery applicators and MLC. Narrow circular and Z-shaped on-axis and off-axis fields, as well as broad IMRT configured beams, have been simulated together with reference 10 x 10 cm2 beams. Phase-space data have been used to generate 3D dose distributions which have been compared satisfactorily with experimental profiles (ion chamber, diodes and film). Photon and electron spectra at various depths in water have been calculated, followed by Spencer-Attix (delta = 10 keV) stopping-power ratio calculations which have been compared to those used in the IAEA TRS-398 code of practice. For water/air and PMMA/air stopping-power ratios, agreement within 0.1% has been obtained for the 10 x 10 cm2 fields. For radiosurgery applicators and narrow MLC beams, the calculated s(w,air) values agree with the reference within +/-0.3%, well within the estimated standard uncertainty of the reference stopping-power ratios (0.5%). Ionization chamber dosimetry of narrow beams at the photon qualities used in this work (6 MV) can therefore be based on stopping-power ratio data in dosimetry protocols. For a modulated 6 MV broad beam used in clinical IMRT, s(w,air) agrees within 0.1% with the value for 10 x 10 cm2, confirming that at low energies IMRT absolute dosimetry can also be based on data for open reference fields. 
At higher energies (24 MV) the difference in s(w,air) was up to 1.1%, indicating that the use of protocol data for narrow beams in such cases is less accurate than at low energies, and detailed calculations of the dosimetry parameters involved should be performed if similar accuracy to that of 6 MV is sought.

  20. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. 
The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
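The Smagorinsky model discussed above closes the subgrid stresses with an eddy viscosity nu_t = (Cs · Delta)^2 · |S|, where Cs is the Smagorinsky constant, Delta the filter width, and |S| the resolved strain-rate magnitude. A minimal sketch with illustrative values:

```python
def smagorinsky_nu_t(cs, delta, strain_rate_mag):
    """Smagorinsky eddy viscosity: nu_t = (Cs * Delta)^2 * |S|.
    cs dimensionless, delta in m, strain_rate_mag in 1/s -> nu_t in m^2/s."""
    return (cs * delta) ** 2 * strain_rate_mag

# Illustrative values: Cs = 0.17, a 1 cm filter width, |S| = 50 1/s.
nu_t = smagorinsky_nu_t(cs=0.17, delta=0.01, strain_rate_mag=50.0)
print(f"{nu_t:.3e} m^2/s")
```

The CM/AM/SM hierarchy proposed in the paper generalizes this closure by letting the coefficient respond to buoyancy, rotation, and stratification instead of being a fixed constant.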

  1. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01


  2. Large Eddy Simulation of Engineering Flows: A Bill Reynolds Legacy.

    NASA Astrophysics Data System (ADS)

    Moin, Parviz

    2004-11-01

    The term 'large eddy simulation' (LES) was coined by Bill Reynolds thirty years ago, when he and his colleagues pioneered the introduction of LES in the engineering community. Bill's legacy in LES features his insistence on having a proper mathematical definition of the large-scale field independent of the numerical method used, and his vision for using numerical simulation output as data for research in turbulence physics and modeling, just as one would think of using experimental data. However, as an engineer, Bill was predominantly interested in the predictive capability of computational fluid dynamics, and in particular LES. In this talk I will present the state of the art in large eddy simulation of complex engineering flows. Most of this technology has been developed in the Department of Energy's ASCI Program at Stanford, which was led by Bill in the last years of his distinguished career. At the core of this technology is a fully implicit, non-dissipative LES code which uses unstructured grids with arbitrary elements. A hybrid Eulerian/Lagrangian approach is used for multi-phase flows, and chemical reactions are introduced through dynamic equations for mixture fraction and reaction progress variable in conjunction with flamelet tables. The predictive capability of LES is demonstrated in several validation studies in flows with complex physics and complex geometry, including flow in the combustor of a modern aircraft engine. LES in such a complex application is only possible through efficient utilization of modern parallel supercomputers, which was recognized and emphasized by Bill from the beginning. The presentation will include a brief mention of computer science efforts for efficient implementation of LES.

  3. Application of the High Gradient hydrodynamics code to simulations of a two-dimensional zero-pressure-gradient turbulent boundary layer over a flat plate

    NASA Astrophysics Data System (ADS)

    Kaiser, Bryan E.; Poroseva, Svetlana V.; Canfield, Jesse M.; Sauer, Jeremy A.; Linn, Rodman R.

    2013-11-01

    The High Gradient hydrodynamics (HIGRAD) code is an atmospheric computational fluid dynamics code created by Los Alamos National Laboratory to accurately represent flows characterized by sharp gradients in velocity, concentration, and temperature. HIGRAD uses a fully compressible finite-volume formulation for explicit Large Eddy Simulation (LES) and features an advection scheme that is second-order accurate in time and space. In the current study, boundary conditions implemented in HIGRAD are varied to find those that best reproduce the reduced physics of a flat-plate boundary layer, for comparison with the complex physics of the atmospheric boundary layer. Numerical predictions are compared with available DNS, experimental, and LES data obtained by other researchers, and high-order turbulence statistics are collected. The Reynolds number based on the free-stream velocity and the momentum thickness is 120 at the inflow, and the Mach number of the flow is 0.2. Results are compared at Reynolds numbers of 670 and 1410. A part of the material is based upon work supported by NASA under award NNX12AJ61A and by the Junior Faculty UNM-LANL Collaborative Research Grant.
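The inflow condition quoted above can be related to physical scales through the one-line definition Re_theta = U·theta/nu. The air properties and free-stream velocity below are assumed for illustration and are not taken from the study.

```python
# Illustrative only: momentum-thickness Reynolds number, Re_theta = U*theta/nu.
# The values of U and nu are assumptions for the sketch, not data from HIGRAD.
def re_theta(u_freestream: float, theta: float, nu: float) -> float:
    """Reynolds number based on free-stream velocity and momentum thickness."""
    return u_freestream * theta / nu

nu_air = 1.5e-5   # m^2/s, kinematic viscosity of air near sea level (assumed)
u = 68.0          # m/s, roughly Mach 0.2 for a 340 m/s sound speed (assumed)
theta = 120 * nu_air / u   # momentum thickness implied by Re_theta = 120
print(f"theta ~ {theta * 1e6:.0f} micrometres for Re_theta = 120")
```

The point of the sketch is simply that Re_theta = 120 corresponds to a very thin laminar-like inflow layer, which the simulation then grows to the comparison stations at Re_theta = 670 and 1410.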

  4. Monte Carlo treatment planning for molecular targeted radiotherapy within the MINERVA system

    NASA Astrophysics Data System (ADS)

    Lehmann, Joerg; Hartmann Siantar, Christine; Wessol, Daniel E.; Wemple, Charles A.; Nigg, David; Cogliati, Josh; Daly, Tom; Descalle, Marie-Anne; Flickinger, Terry; Pletcher, David; DeNardo, Gerald

    2005-03-01

    The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU) and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (modality inclusive environment for radiotherapeutic variable analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plugin architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4—2%, MCNP—10%) (Descalle et al 2003 Cancer Biother. Radiopharm. 18 71-9). The code is currently being benchmarked against experimental data. 
The interpatient variability of the drug pharmacokinetics in MTR can only be properly accounted for by image-based, patient-specific treatment planning, as has been common in external beam radiation therapy for many years. MINERVA offers 3D Monte Carlo-based MTR treatment planning as its first integrated operational capability. The new MINERVA system will ultimately incorporate capabilities for a comprehensive list of radiation therapies. In progress are modules for external beam photon-electron therapy and boron neutron capture therapy (BNCT). Brachytherapy and proton therapy are planned. Through the open application programming interface (API), other groups can add their own modules and share them with the community.
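A plugin-style dose-engine interface of the kind the abstract describes might look roughly as follows. MINERVA's actual API is not reproduced in this record, so every name in this sketch is hypothetical.

```python
# Hypothetical sketch of a modality-plugin interface for a MINERVA-like
# framework. All class, method and key names here are invented for
# illustration; they are not MINERVA's real API.
from abc import ABC, abstractmethod

class DoseEngine(ABC):
    """A computational dosimetry module a framework could load as a plugin."""

    @abstractmethod
    def modality(self) -> str:
        """Treatment modality this engine handles, e.g. 'MTR' or 'BNCT'."""

    @abstractmethod
    def compute_dose(self, patient_images: list, source_spec: dict) -> dict:
        """Return a dose map keyed by region of interest."""

class TargetedRadiotherapyEngine(DoseEngine):
    """Stand-in for a PEREGRINE-style engine with internal sources."""

    def modality(self) -> str:
        return "MTR"

    def compute_dose(self, patient_images, source_spec):
        # Placeholder: a real engine would run Monte Carlo transport here.
        return {roi: 0.0 for roi in source_spec.get("rois", [])}

engine = TargetedRadiotherapyEngine()
print(engine.modality())  # prints "MTR"
```

The design point the abstract emphasizes is exactly this separation: the framework owns imaging, geometry and reporting, while each modality's transport engine is swapped in behind a common interface.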

  5. Determination of absorbed dose to water for high-energy photon and electron beams-comparison of the standards DIN 6800-2 (1997), IAEA TRS 398 (2000) and DIN 6800-2 (2006)

    PubMed Central

    Zakaria, Golam Abu; Schuette, Wilhelm

    2007-01-01

    For the determination of the absorbed dose to water in high-energy photon and electron beams, the IAEA code of practice TRS-398 (2000) is applied internationally. In Germany, the German dosimetry protocol DIN 6800-2 (1997) is used. Recently, the DIN standard was revised and published as Draft National Standard DIN 6800-2 (2006); it has widely adopted the methodology and dosimetric data of the code of practice. This paper compares these three dosimetry protocols systematically and identifies similarities as well as differences. The investigation was done with 6 and 18 MV photon beams as well as 5 to 21 MeV electron beams. While only cylindrical chambers were used for photon beams, measurements in electron beams were performed using cylindrical as well as plane-parallel chambers. The discrepancies in the determination of absorbed dose to water between the three protocols were 0.4% for photon beams and 1.5% for electron beams. Comparative measurements showed a deviation of less than 0.5% between our measurements following protocol DIN 6800-2 (2006) and a TLD inter-comparison procedure in an external audit. PMID:21217912
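The protocols being compared share the absorbed-dose-to-water formalism D_w,Q = M_Q · N_D,w · k_Q. A minimal sketch, with placeholder numbers rather than data from the paper:

```python
# Minimal sketch of the absorbed-dose-to-water formalism common to TRS-398
# and DIN 6800-2: D_w,Q = M_Q * N_D,w * k_Q. The numeric values below are
# placeholders for illustration, not measurements from the study.
def absorbed_dose_to_water(m_q: float, n_dw: float, k_q: float) -> float:
    """
    m_q : chamber reading (C), already corrected for temperature/pressure,
          polarity and ion recombination
    n_dw: absorbed-dose-to-water calibration coefficient in Co-60 (Gy/C)
    k_q : beam quality correction factor (dimensionless)
    """
    return m_q * n_dw * k_q

dose = absorbed_dose_to_water(m_q=20.0e-9, n_dw=5.0e7, k_q=0.992)
print(f"D_w = {dose:.4f} Gy")  # 0.9920 Gy for these placeholder inputs
```

The protocol differences the paper quantifies (0.4% photons, 1.5% electrons) enter mainly through the tabulated k_Q data and the correction factors folded into M_Q, not through this overall structure.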

  7. Ex-vessel neutron dosimetry analysis for westinghouse 4-loop XL pressurized water reactor plant using the RadTrack{sup TM} Code System with the 3D parallel discrete ordinates code RAPTOR-M3G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Alpan, F. A.; Fischer, G.A.

    2011-07-01

    Traditional two-dimensional (2D)/one-dimensional (1D) synthesis methodology has been widely used to calculate the fast neutron (>1.0 MeV) fluence exposure of the reactor pressure vessel in the beltline region. However, this methodology is not expected to provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been performed, and it shows good agreement between the calculated and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations, and at elevations above and below the active core, than those calculated by a 2D/1D synthesis method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)
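The 2D/1D synthesis the abstract contrasts with full 3D transport combines lower-dimensional flux solutions as phi(r,theta,z) ≈ phi(r,theta)·phi(r,z)/phi(r). A small sketch with made-up separable flux shapes, for which the synthesis is exact; the 3D calculation matters precisely where the real flux is not separable:

```python
# Sketch of the 2D/1D flux-synthesis approximation:
#   phi(r, theta, z) ~ phi(r, theta) * phi(r, z) / phi(r)
# The flux shapes below are invented separable test functions, not reactor data.
import math

def synthesized_flux(phi_rtheta, phi_rz, phi_r, r, theta, z):
    """Combine a 2D (r,theta) solution, a 2D (r,z) solution and a 1D r solution."""
    return phi_rtheta(r, theta) * phi_rz(r, z) / phi_r(r)

# For a genuinely separable flux phi = R(r) * T(theta) * Z(z), synthesis is exact:
R = lambda r: math.exp(-r)
T = lambda th: 1.0 + 0.1 * math.cos(th)
Z = lambda z: math.cos(z)

phi_exact = lambda r, th, z: R(r) * T(th) * Z(z)
approx = synthesized_flux(lambda r, th: R(r) * T(th),
                          lambda r, z: R(r) * Z(z),
                          R, 1.0, 0.5, 0.3)
print(approx, phi_exact(1.0, 0.5, 0.3))  # identical for a separable flux
```

Near irregular internals and far above or below the core, the flux no longer factors this way, which is why the paper finds large synthesis errors there and recommends 3D discrete ordinates instead.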

  8. Reactor Dosimetry State of the Art 2008

    NASA Astrophysics Data System (ADS)

    Voorbraak, Wim; Debarberis, Luigi; D'Hondt, Pierre; Wagemans, Jan

    2009-08-01

    Oral session 1: Retrospective dosimetry. Retrospective dosimetry of VVER 440 reactor pressure vessel at the 3rd unit of Dukovany NPP / M. Marek ... [et al.]. Retrospective dosimetry study at the RPV of NPP Greifswald unit 1 / J. Konheiser ... [et al.]. Test of prototype detector for retrospective neutron dosimetry of reactor internals and vessel / K. Hayashi ... [et al.]. Neutron doses to the concrete vessel and tendons of a magnox reactor using retrospective dosimetry / D. A. Allen ... [et al.]. A retrospective dosimetry feasibility study for Atucha I / J. Wagemans ... [et al.]. Retrospective reactor dosimetry with zirconium alloy samples in a PWR / L. R. Greenwood and J. P. Foster -- Oral session 2: Experimental techniques. Characterizing the time-dependent components of reactor n/γ environments / P. J. Griffin, S. M. Luker and A. J. Suo-Anttila. Measurements of the recoil-ion response of silicon carbide detectors to fast neutrons / F. H. Ruddy, J. G. Seidel and F. Franceschini. Measurement of the neutron spectrum of the HB-4 cold source at the high flux isotope reactor at Oak Ridge National Laboratory / J. L. Robertson and E. B. Iverson. Feasibility of cavity ring-down laser spectroscopy for dose rate monitoring on nuclear reactor / H. Tomita ... [et al.]. Measuring transistor damage factors in a non-stable defect environment / D. B. King ... [et al.]. Neutron-detection based monitoring of void effects in boiling water reactors / J. Loberg ... [et al.] -- Poster session 1: Power reactor surveillance, retrospective dosimetry, benchmarks and inter-comparisons, adjustment methods, experimental techniques, transport calculations. Improved diagnostics for analysis of a reactor pulse radiation environment / S. M. Luker ... [et al.]. Simulation of the response of silicon carbide fast neutron detectors / F. Franceschini, F. H. Ruddy and B. Petrović. NSV A-3: a computer code for least-squares adjustment of neutron spectra and measured dosimeter responses / J. G.
Williams, A. P. Ribaric and T. Schnauber. Agile high-fidelity MCNP model development techniques for rapid mechanical design iteration / J. A. Kulesza. Extension of RAPTOR-M3G to r-θ-z geometry for use in reactor dosimetry applications / M. A. Hunter, G. Longoni and S. L. Anderson. In-vessel exposure distributions evaluated with MCNP5 for Atucha II / J. M. Longhino, H. Blaumann and G. Zamonsky. Atucha I nuclear power plant azimuthal ex-vessel flux profile evaluation / J. M. Longhino ... [et al.]. UFTR thermal column characterization and redesign for maximized thermal flux / C. Polit and A. Haghighat. Activation counter using liquid light-guide for dosimetry of neutron burst / M. Hayashi ... [et al.]. Control rod reactivity curves for the annular core research reactor / K. R. DePriest ... [et al.]. Specification of irradiation conditions in VVER-440 surveillance positions / V. Kochkin ... [et al.]. Simulations of Mg-Ar ionisation and TE-TE ionisation chambers with MCNPX in a straightforward gamma and beta irradiation field / S. Nievaart ... [et al.]. The change of austenitic stainless steel elements content in the inner parts of VVER-440 reactor during operation / V. Smutný, J. Hep and P. Novosad. Fast neutron environmental spectrometry using disk activation / G. Lövestam ... [et al.]. Optimization of the neutron activation detector location scheme for VVER-1000 ex-vessel dosimetry / V. N. Bukanov ... [et al.]. Irradiation conditions for surveillance specimens located in plane containers installed in the WWER-1000 reactor of unit 2 of the South-Ukrainian NPP / O. V. Grytsenko, V. N. Bukanov and S. M. Pugach. Conformity between LRO mock-ups and VVERs NPP RPV neutron flux attenuation / S. Belousov, Kr. Ilieva and D. Kirilova. FLUOLE: a new relevant experiment for PWR pressure vessel surveillance / D. Beretz ... [et al.]. Transport of neutrons and photons through the iron and water layers / M. J. Kost'ál ... [et al.].
Condition evaluation of spent nuclear fuel assemblies from the first-generation nuclear-powered submarines by gamma scanning / A. F. Usatyi, L. A. Serdyukova and B. S. Stepennov -- Oral session 3: Power plant surveillance. Upgraded neutron dosimetry procedure for VVER-440 surveillance specimens / V. Kochkin ... [et al.]. Neutron dosimetry on the full-core first generation VVER-440 aimed at reactor support structure load evaluation / P. Borodkin ... [et al.]. Ex-vessel neutron dosimetry programs for PWRs in Korea / C. S. Yoo, B. C. Kim and C. C. Kim. Comparison of irradiation conditions of VVER-1000 reactor pressure vessel and surveillance specimens for various core loadings / V. N. Bukanov ... [et al.]. Re-evaluation of dosimetry in the new surveillance program for the Loviisa 1 VVER-440 reactor / T. Serén -- Oral session 4: Benchmarks, intercomparisons and adjustment methods. Determination of the neutron parameter's uncertainties using the stochastic methods of uncertainty propagation and analysis / G. Grégoire ... [et al.]. Covariance matrices for calculated neutron spectra and measured dosimeter responses / J. G. Williams ... [et al.]. The role of dosimetry at the high flux reactor / S. C. van der Marck ... [et al.]. Calibration of a manganese bath relative to Cf-252 nu-bar / D. M. Gilliam, A. T. Yue and M. Scott Dewey. Major upgrade of the reactor dosimetry interpretation methodology used at the CEA: general principle / C. Destouches ... [et al.] -- Oral session 5: Power plant surveillance. The role of ex-vessel neutron dosimetry in reactor vessel surveillance in South Korea / B.-C. Kim ... [et al.]. Spanish RPV surveillance programmes: lessons learned and current activities / A. Ballesteros and X. Jardí. Atucha I nuclear power plant extended dosimetry and assessment / H. Blaumann ... [et al.]. Monitoring of radiation load of pressure vessels of Russian VVER in compliance with license amendments / G. Borodkin ... [et al.]
-- Poster session 2: Test reactors, accelerators and advanced systems; cross sections, nuclear data, damage correlations. Two-dimensional mapping of the calculated fission power for the full-size fuel plate experiment irradiated in the advanced test reactor / G. S. Chang and M. A. Lillo. The radiation safety information computational center: a resource for reactor dosimetry software and nuclear data / B. L. Kirk. Irradiated xenon isotopic ratio measurement for failed fuel detection and location in fast reactor / C. Ito, T. Iguchi and H. Harano. Characterization of dosimetry of the BMRR horizontal thimble tubes and broad beam facility / J.-P. Hu, R. N. Reciniello and N. E. Holden. 2007 nuclear data review / N. E. Holden. Further dosimetry studies at the Rhode Island nuclear science / R. N. Reciniello ... [et al.]. Characterization of neutron fields in the experimental fast reactor Joyo MK-III core / S. Maeda ... [et al.]. Measuring 6Li(n, t) and 10B(n, α) cross sections using the NIST alpha-gamma apparatus / M. S. Dewey ... [et al.]. Improvement of neutron/gamma field evaluation for restart of JMTR / Y. Nagao ... [et al.]. Monitoring of the irradiated neutron fluence in the neutron transmutation doping process of HANARO / M.-S. Kim and S.-J. Park. Training reactor VR-1 neutron spectrum determination / M. Vins, A. Kolros and K. Katovsky. Differential cross sections for gamma-ray production by 14 MeV neutrons on iron and bismuth / V. M. Bondar ... [et al.]. The measurements of the differential elastic neutron cross-sections of carbon for energies from 2 to 133 keV / O. Gritzay ... [et al.]. Determination of neutron spectrum by the dosimetry foil method up to 35 MeV / S. P. Simakov ... [et al.]. Extension of the BGL broad group cross section library / D. Kirilova, S. Belousov and Kr. Ilieva. Measurements of neutron capture cross-section for tantalum at the neutron filtered beams / O. Gritzay and V. Libman.
Measurements of microscopic data at GELINA in support of dosimetry / S. Kopecky ... [et al.]. Nuclide guide and international chart of nuclides - 2008 / T. Golashvili -- Oral session 6: Test reactors, accelerators and advanced systems. Neutronic analyses in support of the HFIR beamline modifications and lifetime extension / I. Remec and E. D. Blakeman. Characterization of neutron test facilities at Sandia National Laboratories / D. W. Vehar ... [et al.]. LYRA irradiation experiments: neutron metrology and dosimetry / B. Acosta and L. Debarberis. Calculated neutron and gamma-ray spectra across the prismatic very high temperature reactor core / J. W. Sterbentz. Enhancement of irradiation capability of the experimental fast reactor Joyo / S. Maeda ... [et al.]. Neutron spectrum analyses by foil activation method for high-energy proton beams / C. H. Pyeon ... [et al.] -- Oral session 7: Cross sections, nuclear data, damage correlations. Investigation of new reaction cross-section evaluations in order to update and extend the IRDF-2002 reactor dosimetry library / É. M. Zsolnay, H. J. Nolthenius and A. L. Nichols. A novel approach towards DPA calculations / A. Hogenbirk and D. F. Da Cruz. A new ENDF/B-VII.0 based multigroup cross-section library for reactor dosimetry / F. A. Alpan and S. L. Anderson. Activities at the NEA for dosimetry applications / H. Henriksson and I. Kodeli. Validation and verification of covariance data from dosimetry reaction cross-section evaluations / S. Badikov. Status of the neutron cross section standards / A. D. Carlson -- Oral session 8: Transport calculations. A dosimetry assessment for the core restraint of an advanced gas cooled reactor / D. A. Thornton ... [et al.]. Neutron dosimetry study in the region of the support structure of a VVER-1000 type reactor / G. Borodkin ... [et al.]. SNS moderator poison design and experiment validation of the moderator performance / W. Lu ... [et al.].
Analysis of OSIRIS in-core surveillance dosimetry for GONDOLE steel irradiation program by using TRIPOLI-4 Monte Carlo code / Y. K. Lee and F. Malouch. Reactor dosimetry applications using RAPTOR-M3G: a new parallel 3-D radiation transport code / G. Longoni and S. L. Anderson.

  9. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 03: Energy dependence of a clinical probe-format calorimeter and its pertinence to absolute photon and electron beam dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, James; Seuntjens, Jan; Sarfehnia, Arman

    Purpose: To evaluate the intrinsic and absorbed-dose energy dependence of a small-scale graphite calorimeter probe (GPC) developed for use as a routine clinical dosimeter. The influence of charge deposition on the response of the GPC was also assessed by performing absolute dosimetry in clinical linac-based electron beams. Methods: Intrinsic energy dependence was determined by performing constant-temperature calorimetry dose measurements in a water-equivalent solid phantom, under otherwise reference conditions, in five high-energy photon (63.5 < %dd(10)x < 76.3) and five electron (2.3 cm < R50 < 8.3 cm) beams. Reference dosimetry was performed for all beams in question using an Exradin A19 ion chamber with a calibration traceable to national standards. The absorbed-dose component of the overall energy dependence was calculated using the EGSnrc egs-chamber user code. Results: A total of 72 measurements were performed with the GPC, resulting in a standard error on the mean absorbed dose of better than 0.3% for all ten beams. For both the photon and electron beams, no statistically significant energy dependence was observed experimentally. Peak-to-peak variations in the relative response of the GPC across all beam qualities of a given radiation type were on the order of 1%. No effects, either transient or permanent, were attributable to the charge deposited by the electron beams. Conclusions: The GPC's apparent energy independence, combined with its well-established linearity and dose-rate independence, makes it a potentially useful dosimetry system capable of measuring photon and electron doses in absolute terms at the clinical level.

  10. The visible signal responsible for proton therapy dosimetry using bare optical fibers is not Čerenkov radiation.

    PubMed

    Darafsheh, Arash; Taleei, Reza; Kassaee, Alireza; Finlay, Jarod C

    2016-11-01

    Dosimetry using bare plastic optical fibers has emerged as a simple approach to proton beam dosimetry. The source of the signal in this method has been attributed to Čerenkov radiation. The aim of this work was a phenomenological study of the nature of the visible light responsible for the signal in bare fiber-optic dosimetry of proton therapy beams. Plastic fiber-optic probes embedded in solid water phantoms were irradiated with proton beams of energies 100, 180, and 225 MeV produced by a proton therapy cyclotron. Luminescence spectroscopy was performed by a CCD-coupled spectrometer. The spectra were acquired at various depths in phantom to measure the percentage depth dose (PDD) for each beam energy. For comparison, the PDD curves were acquired using a standard multilayer ion chamber device. In order to further analyze the contribution of the Čerenkov radiation in the spectra, Monte Carlo simulation was performed using the FLUKA Monte Carlo code to stochastically simulate radiation transport, ionizing radiation dose deposition, and optical emission of Čerenkov radiation. The measured depth doses using the bare fiber are in agreement with measurements performed by the multilayer ion chamber device, indicating the feasibility of using bare fiber probes for proton beam dosimetry. The spectroscopic study of proton-irradiated fibers showed a continuous spectrum with a shape different from that of Čerenkov radiation. The Monte Carlo simulations confirmed that the amount of the generated Čerenkov light does not follow the radiation absorbed dose in a medium. The source of the optical signal responsible for the proton dose measurement using bare optical fibers is not Čerenkov radiation; it is fluorescence of the plastic material of the fiber.
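The abstract's conclusion is consistent with simple threshold kinematics: Čerenkov emission requires v > c/n, and for protons in a plastic fiber (refractive index assumed here to be about 1.49, typical of PMMA; the paper does not quote a value) the threshold lies well above the beam energies used. A sketch of the arithmetic:

```python
# Supporting arithmetic, not from the paper: kinetic-energy threshold for
# direct Cerenkov emission by protons in a medium of refractive index n.
# n = 1.49 is an assumed, PMMA-like value.
import math

M_P = 938.272  # proton rest energy, MeV

def cherenkov_threshold_kinetic_energy(n: float, rest_energy: float) -> float:
    """Kinetic energy (MeV) above which a charged particle emits Cerenkov light."""
    beta = 1.0 / n                              # threshold condition: v > c/n
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * rest_energy

threshold = cherenkov_threshold_kinetic_energy(1.49, M_P)
print(f"proton Cerenkov threshold ~ {threshold:.0f} MeV")  # ~327 MeV, above 225 MeV
```

Since 100-225 MeV protons are below this threshold, any Čerenkov light in such beams must come from secondary electrons, which supports attributing the measured signal to fluorescence of the fiber material instead.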

  11. Computer Code for the Determination of Ejection Seat/Man Aerodynamic Parameters.

    DTIC Science & Technology

    1980-08-28

    ARMS, and LES (computer code data names) and Seat consisted of 4 panels SEAT, BACK, PADD, and SIDE. ... A general application of Eq. (1) is for blunt bodies at hypersonic speed, because accuracy of this equation becomes better at higher Mach number. Therefore ... pressure coefficient is set equal to zero on those portions of the body that are invisible to a distant observer who views the body from the direction

  12. A Selection of Experimental Test Cases for the Validation of CFD Codes (Recueil de cas d’essai experimentaux pour la validation des codes de l’aerodynamique numerique). Volume 1

    DTIC Science & Technology

    1994-08-01

    volume II. The report is accompanied by a set of diskettes containing the data appropriate to all the test cases. These diskettes are available ... GERMANY. PURPOSE OF THE TEST: The tests are part of a larger effort to establish a database of experimental measurements for missile configurations

  13. Computation of unsteady turbomachinery flows: Part 2—LES and hybrids

    NASA Astrophysics Data System (ADS)

    Tucker, P. G.

    2011-10-01

    The choice of turbulence model can have a strong impact on results for many turbomachinery zones. Palliative corrections to them and also transition modeling can have a further profound solution impact. The spectral gaps necessary for theoretically valid URANS solutions are also lacking in certain turbomachinery zones. Large Eddy Simulation (LES) alleviates the serious area of turbulence modeling uncertainty, but with an extreme increase in computational cost. However, there seems to be a lack of validation data to explore in depth the performance of LES and thus strategies to refine it. LES best practices are needed. Although LES is, obviously, much less model dependent than RANS, grids currently used for more practical simulations are clearly insufficiently fine for the LES model and numerical schemes not to be playing an excessively strong role. Very few turbomachinery simulations make use of properly constructed, correlated turbulence inflow. Even if this is attempted, most measurement sets are incomplete and lack an adequate basis for modeling this inflow. Gas turbines are highly complex coupled systems and hence inflow and outflow boundary condition specification needs to go beyond just synthesizing turbulent structures and preventing their reflection. Despite the strong limitations of the dissipative Smagorinsky model, it still sees the most widespread use, generally, in excessively dissipative flow solvers. Monotone Integrated LES (MILES) related approaches, hybrid LES-RANS and more advanced LES models seem to have an equal but subservient frequency of use in turbomachinery applications. Clearly the introduction of a RANS layer can have a substantial accuracy penalty. However, it does allow LES to be rationally used, albeit in a diluted sense, for industrial applications. The Reynolds numbers found in turbomachinery are substantial.
However, in certain areas evidence suggests they will not be enough to ensure a long inertial subrange and hence the use of standard LES modeling practices. Despite the excessively coarse grids used in much of the LES work reviewed, with essentially RANS-based codes, meaningful results are often gained. This can perhaps be attributed to the choice of cases, these being ones for which RANS modeling gives extremely poor performance. It is a concern that for practical turbomachinery LES studies the grid densities used tend to scale with Reynolds number to a strong negative power.
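For reference, the dissipative Smagorinsky closure discussed above models the SGS stresses with an eddy viscosity nu_t = (C_s·Delta)^2·|S|. A minimal 2-D sketch; the grid, the constant C_s and the velocity field are all assumed for illustration:

```python
# Minimal sketch of the Smagorinsky SGS eddy viscosity,
#   nu_t = (C_s * Delta)^2 * |S|,  |S| = sqrt(2 S_ij S_ij),
# on a uniform 2-D grid. C_s = 0.17 and the shear field are assumed values.
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, c_s=0.17):
    """Smagorinsky eddy viscosity for a 2-D field; u, v indexed as [y, x]."""
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + 2.0 * s12**2))  # |S|
    delta = np.sqrt(dx * dy)        # filter width tied to the grid spacing
    return (c_s * delta) ** 2 * s_mag

# uniform shear u = y, v = 0: |S| = 1 everywhere, so nu_t = (c_s * delta)^2
dx = dy = 0.1
y_axis = np.arange(0.0, 1.0 + dy, dy)
x_axis = np.arange(0.0, 1.0 + dx, dx)
u = np.tile(y_axis[:, None], (1, x_axis.size))   # u(y, x) = y
nu_t = smagorinsky_nu_t(u, np.zeros_like(u), dx, dy)
print(nu_t[0, 0])   # ~2.89e-4 for this grid and C_s
```

The model's limitation noted in the text is visible in the formula itself: nu_t responds only to the resolved strain rate, with no dependence on buoyancy, rotation or stratification.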

  14. A boundary-representation method for designing whole-body radiation dosimetry models: pregnant females at the ends of three gestational periods—RPI-P3, -P6 and -P9

    NASA Astrophysics Data System (ADS)

    Xu, X. George; Taranenko, Valery; Zhang, Juying; Shi, Chengyu

    2007-12-01

    Fetuses are extremely radiosensitive and the protection of pregnant females against ionizing radiation is of particular interest in many health and medical physics applications. Existing models of pregnant females relied on simplified anatomical shapes or partial-body images of low resolutions. This paper reviews two general types of solid geometry modeling: constructive solid geometry (CSG) and boundary representation (BREP). It presents in detail a project to adopt the BREP modeling approach to systematically design whole-body radiation dosimetry models: a pregnant female and her fetus at the ends of three gestational periods of 3, 6 and 9 months. Based on previously published CT images of a 7-month pregnant female, the VIP-Man model and mesh organ models, this new set of pregnant female models was constructed using 3D surface modeling technologies instead of voxels. The organ masses were adjusted to agree with the reference data provided by the International Commission on Radiological Protection (ICRP) and previously published papers within 0.5%. The models were then voxelized for the purpose of performing dose calculations in identically implemented EGS4 and MCNPX Monte Carlo codes. The agreements of the fetal doses obtained from these two codes for this set of models were found to be within 2% for the majority of the external photon irradiation geometries of AP, PA, LAT, ROT and ISO at various energies. It is concluded that the so-called RPI-P3, RPI-P6 and RPI-P9 models have been reliably defined for Monte Carlo calculations. The paper also discusses the needs for future research and the possibility for the BREP method to become a major tool in the anatomical modeling for radiation dosimetry.
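The voxelization step described above can be illustrated on an analytic shape. The sketch below rasterizes a unit-density sphere "organ" onto grids of decreasing voxel size and compares the voxelized mass with the reference value; the actual BREP-to-voxel pipeline for anatomical meshes is of course far more involved.

```python
# Illustrative only: voxelize an analytic sphere by testing voxel centers,
# then compare the voxelized mass with the analytic reference. The radius,
# voxel sizes and density are assumed values, not data from the RPI-P models.
import numpy as np

def voxelize_sphere(radius_cm, voxel_cm, density_g_cm3=1.0):
    """Return the voxelized mass (g) of a sphere of the given radius."""
    n = int(np.ceil(2 * radius_cm / voxel_cm)) + 2
    axis = (np.arange(n) - (n - 1) / 2.0) * voxel_cm   # centered grid
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    inside = x**2 + y**2 + z**2 <= radius_cm**2
    return inside.sum() * voxel_cm**3 * density_g_cm3

reference = 4.0 / 3.0 * np.pi * 5.0**3   # analytic mass of a 5 cm sphere, ~523.6 g
for voxel in (0.5, 0.25, 0.1):
    mass = voxelize_sphere(5.0, voxel)
    print(f"voxel {voxel} cm: mass {mass:.1f} g ({100 * (mass / reference - 1):+.2f}%)")
```

Shrinking the voxels drives the voxelized mass toward the reference value, which is the same convergence the paper enforces when matching organ masses to ICRP reference data within 0.5%.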

  15. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen

    2016-01-01

    NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model- invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  16. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    NASA Astrophysics Data System (ADS)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude larger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D to 10D) are also observed in a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by a scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
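    The premultiplied spectra used in the analysis above weight the energy spectrum by the wavenumber, so that a peak in k·E(k) plotted against log k marks the length scale carrying the most energy. A minimal one-dimensional sketch, not tied to the paper's Nek5000 setup (the test signal and normalization are illustrative assumptions):

```python
import numpy as np

def premultiplied_spectrum(u, dx):
    """One-dimensional premultiplied energy spectrum k*E(k) of a signal u(x).
    A peak in k*E(k) on a logarithmic wavenumber axis marks the dominant
    eddy size, which is how contributing length scales are identified."""
    n = u.size
    uhat = np.fft.rfft(u - u.mean())
    E = (np.abs(uhat) ** 2) / n**2          # discrete energy spectrum
    k = 2 * np.pi * np.fft.rfftfreq(n, dx)  # angular wavenumbers
    return k, k * E

# Signal dominated by wavenumber k = 5 plus a weak high-wavenumber mode:
# the premultiplied spectrum should peak at k = 5.
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
k, kE = premultiplied_spectrum(np.sin(5 * x) + 0.1 * np.sin(40 * x), dx=x[1] - x[0])
```

    In the paper the same idea is applied to two-dimensional spectra of the MKE flux terms, but the peak-identification logic is identical.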

  17. LES FOR SIMULATING THE GAS EXCHANGE PROCESS IN A SPARK IGNITION ENGINE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2015-01-01

    The gas exchange process is known to be a significant source of cyclic variability in Internal Combustion Engines (ICE). Traditionally, Large Eddy Simulations (LES) are expected to capture these cycle-to-cycle variations. This paper reports a numerical effort to establish best practices for capturing cyclic variability with LES tools in a Transparent Combustion Chamber (TCC) spark ignition engine. The main intention is to examine the sensitivity of cycle-averaged mean and Root Mean Square (RMS) flow fields and Proper Orthogonal Decomposition (POD) modes to different computational hardware, adaptive mesh refinement (AMR) and LES sub-grid scale (SGS) models, since these aspects have received little attention in the past couple of decades. This study also examines the effect of near-wall resolution on the predicted wall shear stresses. LES is pursued with the commercially available CONVERGE code. Two different SGS models are tested: a one-equation eddy viscosity model and a dynamic structure model. The results indicate that both mean and RMS fields without any SGS model are not much different from those with LES models, either the one-equation eddy viscosity or the dynamic structure model. Computational hardware results in subtle quantitative differences, especially in RMS distributions. The influence of AMR on both mean and RMS fields is negligible. The predicted shear stresses near the liner walls are also found to be relatively insensitive to near-wall resolution except in the valve curtain region.

  18. SU-F-T-367: Using PRIMO, a PENELOPE-Based Software, to Improve the Small Field Dosimetry of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benmakhlouf, H; Andreo, P; Brualla, L

    2016-06-15

    Purpose: To calculate output correction factors for Varian Clinac 2100iX beams for seven small-field detectors and use the values to determine the small-field output factors for the linacs at Karolinska University Hospital. Methods: Phase space files (psf) for square fields between 0.25 cm and 10 cm were calculated using the PENELOPE-based PRIMO software. The linac MC model was tuned by comparing PRIMO-estimated and experimentally determined depth doses and lateral dose profiles for 40 cm x 40 cm fields. The calculated psf were used as radiation sources to calculate the correction factors of IBA and PTW detectors with the code penEasy/PENELOPE. Results: The optimal tuning parameters of the MC linac model in PRIMO were an incident electron energy of 5.4 MeV, with zero energy spread, focal spot size and beam divergence. Correction factors obtained for the liquid ion chamber (PTW-T31018) are within 1% down to 0.5 cm fields. For unshielded diodes (IBA-EFD, IBA-SFD, PTW-T60017 and PTW-T60018) the corrections are up to 2% at intermediate fields (>1 cm side), and reach −11% for fields smaller than 1 cm. The shielded diode (IBA-PFD and PTW-T60016) corrections vary with field size from 0 to −4%. Volume averaging effects are found for most detectors for 0.25 cm fields. Conclusion: Good agreement was found between correction factors based on PRIMO-generated psf and those from other publications. The calculated factors will be implemented in output factor measurements (using several detectors) in the clinic. PRIMO is a user-friendly general-purpose code capable of generating small-field psf and can be used without having to code one's own linac geometries. It can therefore be used to improve clinical dosimetry, especially in the commissioning of linear accelerators. Important dosimetry data, such as dose profiles and output factors, can be determined more accurately for a specific machine, geometry and setup by using PRIMO together with an MC model of the detector used.

  19. SU-F-T-12: Monte Carlo Dosimetry of the 60Co Bebig High Dose Rate Source for Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, L T; Almeida, C E V de

    Purpose: The purpose of this work is to obtain the dosimetry parameters in accordance with the AAPM TG-43U1 formalism with Monte Carlo calculations for the BEBIG 60Co high-dose-rate brachytherapy source. The geometric design and material details of the source were provided by the manufacturer and were used to define the Monte Carlo geometry. Methods: The dosimetry studies included the calculation of the air kerma strength Sk, the collision kerma in water along the transverse axis in an unbounded phantom, the dose rate constant and the radial dose function. The Monte Carlo code system used was EGSnrc with a new cavity code, which is part of EGS++ and allows calculating the radial dose function around the source. The XCOM photon cross-section library was used. Variance reduction techniques were used to speed up the calculation and to considerably reduce the computer time. To obtain the dose rate distributions of the source in an unbounded liquid water phantom, the source was immersed at the center of a cube phantom of 100 cm3. Results: The obtained dose rate constant for the BEBIG 60Co source was 1.108 ± 0.001 cGy h-1 U-1, which is consistent with the values in the literature. The radial dose functions were compared with the values of the consensus data set in the literature, and they are consistent with the published data for this energy range. Conclusion: The dose rate constant is consistent with the results of Granero et al. and Selvam and Bhola within 1%. Dose rate data are compared to the GEANT4 and DOSRZnrc Monte Carlo codes. However, the radial dose function differs by up to 10% for points notably near the source on the transverse axis, because the high-energy photons from 60Co cause electronic disequilibrium at the interface between the source capsule and the liquid water for distances up to 1 cm.
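    On the transverse axis, the TG-43 dose rate reduces in the point-source approximation to D(r) = Sk · Λ · (r0/r)² · g(r). A minimal sketch of this evaluation; the radial dose function below is a made-up placeholder, not the consensus data set, while the dose rate constant is the value reported in the abstract:

```python
def dose_rate_tg43(sk_U, dose_rate_const, r_cm, g_of_r, r0_cm=1.0):
    """TG-43 point-source approximation on the transverse axis:
    D(r) = Sk * Lambda * (r0/r)^2 * g(r).
    sk_U: air-kerma strength in U (1 U = 1 cGy cm^2 / h);
    dose_rate_const: Lambda in cGy h^-1 U^-1;
    g_of_r: radial dose function, normalized so g(r0) = 1."""
    geometry = (r0_cm / r_cm) ** 2      # inverse-square geometry factor
    return sk_U * dose_rate_const * geometry * g_of_r(r_cm)

# Placeholder radial dose function (illustrative only, NOT consensus data)
g = lambda r: 1.0 - 0.02 * (r - 1.0)

# Dose rate at 2 cm for a source of 10 U with Lambda = 1.108 cGy/(h U)
rate = dose_rate_tg43(sk_U=10.0, dose_rate_const=1.108, r_cm=2.0, g_of_r=g)
```

    The full TG-43U1 formalism replaces the inverse-square factor with the line-source geometry function and adds the 2D anisotropy function off-axis.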

  20. All about MAX: a male adult voxel phantom for Monte Carlo calculations in radiation protection dosimetry

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Vieira, J. W.; Khoury, H. J.; Lima, F. R. A.; Fuelle, D.

    2003-05-01

    The MAX (Male Adult voXel) phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. The study describes the adjustments of the soft-tissue organ masses, a new dosimetric model for the skin, a new model for skeletal dosimetry and a computational exposure model based on coupling the MAX phantom with the EGS4 Monte Carlo code. Conversion coefficients between equivalent dose to the red bone marrow as well as effective MAX dose and air-kerma free in air for external photon irradiation from the front and from the back, respectively, are presented and compared with similar data from other human phantoms.

  1. Calculated effects of backscattering on skin dosimetry for nuclear fuel fragments.

    PubMed

    Aydarous, A Sh

    2008-01-01

    The size of hot particles contained in nuclear fallout ranges from 10 nm to 20 microm for worldwide weapons fallout. Hot particles from nuclear power reactors can be significantly bigger (100 microm to several millimetres). Electron backscattering from such particles is a prominent secondary effect in beta dosimetry for radiological protection purposes, such as skin dosimetry. In this study, the effect of electron backscattering on skin dose due to hot particle contamination is investigated. The parameters studied include detector area, source radius, source energy, scattering material and source density. The Monte Carlo N-Particle code (MCNP4C) was used to calculate the depth dose distribution for 10 different beta sources and various materials. The backscattering dose factors (BSDF) were then calculated. A significant dependence of the BSDF magnitude upon detector area, source radius and scatterers is shown. It is clearly shown that the BSDF increases with increasing detector area. For high-Z scatterers, the BSDF can reach as high as 40 and 100% for sources with radii of 0.1 and 0.0001 cm, respectively. The variation of BSDF with source radius, source energy and source density is discussed.
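    A backscattering dose factor of the kind computed above can be expressed as the percentage increase in dose caused by the scattering material; the exact definition used here is an assumption, chosen to be consistent with the percentages quoted in the abstract:

```python
def backscatter_dose_factor(dose_with_scatterer, dose_without_scatterer):
    """BSDF expressed as the percentage excess dose caused by the
    backscattering material: 100 * (D_with / D_without - 1).
    In practice both doses come from paired Monte Carlo runs with and
    without the scatterer behind the source."""
    return 100.0 * (dose_with_scatterer / dose_without_scatterer - 1.0)

# e.g. a high-Z scatterer raising the skin dose from 2.0 to 2.8
# (arbitrary dose units) gives a 40% backscatter enhancement
bsdf = backscatter_dose_factor(2.8, 2.0)
```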

  2. Application for internal dosimetry using biokinetic distribution of photons based on nuclear medicine images.

    PubMed

    Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade

    2014-01-01

    This article presents a way to obtain dose estimates in patients undergoing radiotherapy, based on the analysis of regions of interest in nuclear medicine images. A software package called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the Monte Carlo EGSnrc code. The software was developed with the Microsoft Visual Studio 2010 Service Pack and the Windows Presentation Foundation project template for the C# programming language. With the mentioned tools, the authors obtained the file for optimization of Monte Carlo simulations using EGSnrc; organization and compaction of dosimetry results for all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.

  3. Computer Aided Dosimetry and Verification of Exposure to Radiation

    NASA Astrophysics Data System (ADS)

    Waller, Edward; Stodilka, Robert Z.; Leach, Karen E.; Lalonde, Louise

    2002-06-01

    In the timeframe following the September 11th attacks on the United States, increased emphasis has been placed on Chemical, Biological, Radiological and Nuclear (CBRN) preparedness. Of prime importance is rapid field assessment of potential radiation exposure to Canadian Forces field personnel. This work set up a framework for generating an 'expert' computer system for aiding and assisting field personnel in determining the extent of radiation insult to military personnel. Data were gathered by review of the available literature, discussions with medical and health physics personnel having hands-on experience dealing with radiation accident victims, and from the experience of the principal investigator. Flow charts and generic data fusion algorithms were developed. Relationships between known exposure parameters, patient interview and history, clinical symptoms, clinical work-ups, physical dosimetry, biological dosimetry, and dose reconstruction as critical data indicators were investigated. The data obtained were examined in terms of information theory. A main goal was to determine how best to generate an adaptive model (i.e. when more data become available, how the prediction is improved). Consideration was given to determination of predictive algorithms for health outcomes. In addition, the concept of coding an expert medical treatment advisor system was developed.

  4. Solar particle events observed at Mars: dosimetry measurements and model calculations

    NASA Astrophysics Data System (ADS)

    Cleghorn, T.; Saganti, P.; Zeitlin, C.; Cucinotta, F.

    The first solar particle events from a Martian orbit were observed with MARIE (Martian Radiation Environment Experiment) on the 2001 Mars Odyssey spacecraft, which is currently in orbit and collecting the mapping data of the red planet. These solar particle events, observed at Mars during March and April 2002, are correlated with GOES-8 and ACE satellite data from the same time period at Earth orbit. Dosimetry measurements were made in Mars orbit for the period of March 13th through April 29th. Particle count rate and corresponding dose rate enhancements were observed on March 16th through 20th and on April 22nd, corresponding to solar particle events observed at Earth orbit on March 16th through 21st and beginning on April 21st, respectively. Model calculations with the HZETRN (High Z (atomic number) and Energy Transport) code estimated the background GCR (Galactic Cosmic Ray) dose rates. The dose rates observed by the MARIE instrument are within 10% of the model calculations. Dosimetry measurements and model calculations will be presented.

  5. Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.

    PubMed

    Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood

    2016-01-01

    Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with a single detector array was modeled using the MCNPX MC code. The MC-calculated photon fluence in the detector arrays was used for image reconstruction of a simple water phantom as well as of a polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and spline interpolation. Using the MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in CT number with increasing absorbed dose for the studied gel. Our results also showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques, in X-ray CT gel dosimetry.
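    Filtered back-projection with a Hann-windowed ramp filter, as used for the reconstruction above, can be sketched in a few lines of NumPy. This is a simplified parallel-beam version with nearest-neighbour interpolation rather than the spline interpolation the authors used:

```python
import numpy as np

def fbp_reconstruct(sinogram, thetas_deg):
    """Minimal filtered back-projection with a Hann-windowed ramp filter.
    sinogram: (n_angles, n_detectors) array of parallel-beam projections.
    Returns an (n_det, n_det) image. Illustrative sketch only."""
    n_angles, n_det = sinogram.shape
    # Ramp filter with Hann apodization, built in frequency space
    freqs = np.fft.fftfreq(n_det)
    ramp = np.abs(freqs)
    hann = 0.5 * (1 + np.cos(2 * np.pi * freqs))  # 1 at DC, 0 at Nyquist
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * (ramp * hann),
                                   axis=1))
    # Back-project each filtered projection across the image grid
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(thetas_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate
        idx = np.clip(np.round(t).astype(int) + mid, 0, n_det - 1)
        image += proj[idx]                          # nearest-neighbour lookup
    return image * np.pi / (2 * n_angles)
```

    Reconstructing a sinogram whose every projection is a centered impulse should yield an image peaking at the center pixel, a quick sanity check on the filter and back-projection steps.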

  6. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    PubMed

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to advances in epidemiological health risk estimates in recent years. The growth of computational power and the development of numerical techniques allow dose conversion factors to be computed with increasing reliability. The objective of this study was to develop an integrated model and software, based on a self-developed airway deposition code, the authors' own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP), to calculate dose conversion coefficients for different exposure conditions. The model was tested by applying it to exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses.
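    Applying a dose conversion factor is a one-line computation once exposure is expressed in working level months (1 WLM corresponds to exposure at 1 working level for 170 hours). A small sketch; the function name and example scenario are hypothetical, while the DCF values are those quoted in the abstract:

```python
def effective_dose_mSv(avg_working_levels, hours_exposed, dcf_mSv_per_WLM):
    """Effective dose from radon-progeny exposure.
    Exposure in working level months (WLM): 1 WLM = 1 WL sustained for 170 h.
    dcf_mSv_per_WLM: dose conversion factor, e.g. 8 (homes) or 16 (mines)
    for the PM-A model quoted in the abstract."""
    wlm = avg_working_levels * hours_exposed / 170.0
    return dcf_mSv_per_WLM * wlm

# Hypothetical scenario: 2000 h per year in a mine at 0.3 WL, DCF = 16 mSv/WLM
dose = effective_dose_mSv(0.3, 2000, 16.0)
```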

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, Benjamin A.

    We report on the use and design of a portable, extensible performance data collection tool motivated by modeling needs of the high performance computing systems co-design community. The lightweight performance data collector with Eiger support is intended to be a tailorable tool, not a shrink-wrapped library product, as profiling needs vary widely. A single code markup scheme is reported which, based on compilation flags, can send performance data from parallel applications to CSV files, to an Eiger mysql database, or (in a non-database environment) to flat files for later merging and loading on a host with mysql available. The tool supports C, C++, and Fortran applications.

  8. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed as a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only piece of information. To build the code presented here, once the net topology was optimized and properly trained, the knowledge stored at synaptic weights was extracted, and using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  9. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed as a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only piece of information. To build the code presented here, once the net topology was optimized and properly trained, the knowledge stored at synaptic weights was extracted, and using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.
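    Once trained, the unfolding reduces to a forward pass through the network, with the two Bonner-sphere count rates as inputs. The sketch below uses random placeholder weights and an assumed 8-neuron hidden layer with 10 output energy bins, since the actual NSDann2BS topology and weights are not given in the abstract:

```python
import numpy as np

def unfold_spectrum(count_rates, W1, b1, W2, b2):
    """Forward pass of a small feedforward net mapping two Bonner-sphere
    count rates to a binned neutron spectrum, mirroring the idea of
    embedding trained synaptic weights in an unfolding code.
    A real net would be trained against reference spectra."""
    x = np.asarray(count_rates, dtype=float)
    h = np.tanh(W1 @ x + b1)        # hidden layer with tanh activation
    out = W2 @ h + b2               # linear output layer: spectrum bins
    return np.clip(out, 0.0, None)  # fluence cannot be negative

# Placeholder "trained" weights: 2 inputs -> 8 hidden -> 10 energy bins
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(10, 8)), np.zeros(10)
spectrum = unfold_spectrum([1.2, 0.7], W1, b1, W2, b2)
```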

  10. Electromagnetic Gauge Study of Laser-Induced Shock Waves in Aluminium Alloys

    NASA Astrophysics Data System (ADS)

    Peyre, P.; Fabbro, R.

    1995-12-01

    The laser-shock behaviour of three industrial aluminum alloys has been analyzed with an electromagnetic gauge method (EMV) for measuring the velocity of the back free surface of thin foils submitted to plane laser irradiation. Surface pressure, shock decay in depth and the Hugoniot Elastic Limits (HEL) of the materials were investigated with increasing thicknesses of the foils to be shocked. First, surface peak pressure values as a function of laser power density agreed well with conventional piezoelectric quartz measurements. Comparisons of the experimental results with computer simulations using a 1D hydrodynamic Lagrangian finite-difference code also showed good agreement. Lastly, HEL values were compared with static and dynamic compressive tests in order to estimate the effects of a very large range of strain rates (10^{-3} s^{-1} to 10^6 s^{-1}) on the mechanical properties of the alloys. (Translated from the French résumé.) This article summarizes a recent study characterizing the laser-shock behaviour of three aluminium alloys in wide industrial use, by means of the so-called electromagnetic gauge method. This method measures the material velocities induced at the back face of plates of varying thickness by a laser impact. The plate velocity measurements first allowed us to verify the validity of the surface impact pressures obtained, by comparing them with earlier results from quartz-gauge measurements. On plates of increasing thickness, we characterized the in-depth attenuation of the shock waves in the alloys studied and measured their yield limits under shock (Hugoniot pressures). The results compared successfully with numerical simulations performed with a one-dimensional Lagrangian code. 
Finally, the measured Hugoniot pressure values allowed the plastic flow stress to be plotted as a function of strain rate for values between 10^{-3} s^{-1} and 10^6 s^{-1}.

  11. Experimental check of bremsstrahlung dosimetry predictions for 0.75 MeV electrons

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Halbleib, J. A.; Beezhold, W.

    Bremsstrahlung dose in CaF2 TLDs from the radiation produced by 0.75 MeV electrons incident on Ta/C targets is measured and compared with that calculated via the CYLTRAN Monte Carlo code. The comparison was made to validate the code, which is used to predict and analyze radiation environments of flash X-ray simulators measured by TLDs. Over a wide range of Ta target thicknesses and radiation angles the code is found to agree with the measurements, which have about 5% uncertainty. For Ta thicknesses near those that optimize the radiation output, however, the code overestimates the radiation dose at small angles. The maximum overprediction is about 14 ± 5%. The general agreement, nonetheless, gives confidence in using the code at this energy and in the TLD calibration procedure. For the bulk of the measurements, a standard TLD employing a 2.2 mm thick Al equilibrator was used. In this paper we also show that this thickness can significantly attenuate the free-field dose and introduces significant photon buildup in the equilibrator.

  12. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. 
When developing or improving turbulence models, both verification and validation are important steps in the process. Verification ensures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. 
The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
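    The method of manufactured solutions mentioned above can be illustrated on a 1-D Poisson problem: choose an exact solution, derive the forcing term analytically, solve numerically, and confirm that the error decreases at the scheme's formal order. A minimal sketch (not from any specific CFD code):

```python
import numpy as np

def mms_error(n):
    """Verify a 1-D Poisson solver (-u'' = f, u(0) = u(1) = 0) with the
    method of manufactured solutions: choose u(x) = sin(pi x), derive
    f(x) = pi^2 sin(pi x) analytically, solve on n intervals with a
    second-order finite-difference scheme, and return the max error."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = np.pi**2 * np.sin(np.pi * x[1:-1])       # manufactured source term
    # Standard tridiagonal second-difference operator (Dirichlet BCs)
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

# Halving h should cut the error by ~4x for a second-order scheme
rate = mms_error(20) / mms_error(40)
```

    Observing the expected convergence rate demonstrates that the discrete operator is implemented correctly; a coding error would typically degrade the order.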

  13. Identification d'une loi thermo-élasto-viscoplastique en vue de la modélisation du laminage à chaud du cuivre

    NASA Astrophysics Data System (ADS)

    Moureaux, P.; Moto Mpong, S.; Remy, M.; Bouffioux, C.; Lecomte-Beckers, J.; Habraken, A. M.

    2002-12-01

    Developing a model to simulate the final pass of hot rolling of copper poses, a priori, no numerical difficulty for a nonlinear finite element code. Gathering accurate information on both the industrial process and the material behaviour is, however, a non-trivial task. This article presents the various experimental methods used to characterize the material: hot compression tests, differential thermal analysis measurements, and dilatometry and diffusivity tests. The methods used to identify the parameters of the Norton-Hoff-type elasto-viscoplastic law from these tests are presented, and a literature review investigates the problem of determining the Young's modulus at high temperature. Both the additional assumptions about the process required by the model and the final results are summarized.

  14. Unstructured LES of Reacting Multiphase Flows in Realistic Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Ham, Frank; Apte, Sourabh; Iaccarino, Gianluca; Wu, Xiao-Hua; Herrmann, Marcus; Constantinescu, George; Mahesh, Krishnan; Moin, Parviz

    2003-01-01

    As part of the Accelerated Strategic Computing Initiative (ASCI) program, an accurate and robust simulation tool is being developed to perform high-fidelity LES studies of multiphase, multiscale turbulent reacting flows in aircraft gas turbine combustor configurations using hybrid unstructured grids. In the combustor, pressurized gas from the upstream compressor is reacted with atomized liquid fuel to produce the combustion products that drive the downstream turbine. The Large Eddy Simulation (LES) approach is used to simulate the combustor because of its demonstrated superiority over RANS in predicting turbulent mixing, which is central to combustion. This paper summarizes the accomplishments of the combustor group over the past year, concentrating mainly on the two major milestones achieved this year: 1) Large scale simulation: A major rewrite and redesign of the flagship unstructured LES code has allowed the group to perform large eddy simulations of the complete combustor geometry (all 18 injectors) with over 100 million control volumes; 2) Multi-physics simulation in complex geometry: The first multi-physics simulations including fuel spray breakup, coalescence, evaporation, and combustion are now being performed in a single periodic sector (1/18th) of an actual Pratt & Whitney combustor geometry.

  15. Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems

    NASA Technical Reports Server (NTRS)

    Morris, C. I.

    2013-01-01

    The last two decades have witnessed tremendous growth in computational power, the development of computational fluid dynamics (CFD) codes which scale well over thousands of processors, and the refinement of unstructured grid-generation tools which facilitate rapid surface and volume gridding of complex geometries. Thus, engineering calculations of 10(exp 7) - 10(exp 8) finite-volume cells have become routine for some types of problems. Although the Reynolds Averaged Navier Stokes (RANS) approach to modeling turbulence is still in extensive and wide use, increasingly large-eddy simulation (LES) and hybrid RANS-LES approaches are being applied to resolve the largest scales of turbulence in many engineering problems. However, it has also become evident that LES places different requirements on the numerical approaches for both the spatial and temporal discretization of the Navier Stokes equations than does RANS. In particular, LES requires high time accuracy and minimal intrinsic numerical dispersion and dissipation over a wide spectral range. In this paper, the performance of both central-difference and upwind-biased spatial discretizations is examined for a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem, and the turbulent channel flow problem.
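    The intrinsic dispersion and dissipation of a spatial scheme are commonly quantified by its modified wavenumber: applying the stencil to e^{ikx} yields an effective wavenumber k′ whose real part measures dispersion and whose imaginary part measures dissipation. A sketch for two classic stencils (standard textbook results, not taken from this paper):

```python
import numpy as np

def modified_wavenumber_central2(k_dx):
    """Modified wavenumber of the 2nd-order central difference
    (u[j+1] - u[j-1]) / (2 dx):  k'dx = sin(k dx).
    Purely real, so no numerical dissipation, but dispersive at high k
    (and blind to the odd-even mode, where sin(pi) = 0)."""
    return np.sin(k_dx)

def modified_wavenumber_upwind1(k_dx):
    """Modified wavenumber of the 1st-order upwind difference
    (u[j] - u[j-1]) / dx:  k'dx = sin(k dx) - i (1 - cos(k dx)).
    The negative imaginary part damps the mode exponentially, i.e.
    numerical dissipation, strongest near the grid cutoff."""
    return np.sin(k_dx) - 1j * (1.0 - np.cos(k_dx))
```

    Plotting both against the exact relation k′dx = k·dx shows why upwind-biased schemes, while robust, erode the high-wavenumber content that LES must retain.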

  17. SU-C-201-07: Towards Clinical Cherenkov Emission Dosimetry: Stopping Power-To-Cherenkov Power Ratios and Beam Quality Specification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zlateva, Y; Seuntjens, J; El Naqa, I

    Purpose: We propose a Cherenkov emission (CE)-based reference dosimetry method, which in contrast to ionization chamber-based dosimetry employs spectrum-averaged electron restricted mass collision stopping power-to-Cherenkov power ratios (SCRs), and we examine Monte Carlo-calculated SCRs and beam quality specification of clinical electron beams. Methods: The EGSnrc user code SPRRZnrc was modified to compute SCRs instead of stopping-power ratios (single medium: water; cut-off: CE threshold, observing Spencer-Attix conditions; CE power: Frank-Tamm). SCRs are calculated with BEAMnrc for realistic electron beams with nominal energies of 6-22 MeV from three Varian accelerators (TrueBeam, Clinac 21EX, Clinac 2100C/D) and for mono-energetic beams of energies equal to the mean electron energy at the water surface. Sources of deviation between clinical and mono-energetic SCRs are analyzed quantitatively. A universal fit for the beam-quality index R50 in terms of the depth of 50% CE, C50, is carried out. Results: SCRs at the reference depth are overestimated by mono-energetic values by up to 0.2% for a 6-MeV beam and underestimated by up to 2.3% for a 22-MeV beam. The variation is mainly due to the clinical beam spectrum and photon contamination. Beam angular spread has a small effect across all depths and energies. The influence of the electron spectrum becomes increasingly significant at large depths, while at shallow depths and high beam energies photon contamination is predominant (up to 2.0%). The universal data fit reveals a strong linear correlation between R50 and C50 (ρ > 0.99999). Conclusion: CE is inherent to radiotherapy beams and can be detected outside the beam with available optical technologies, which makes it an ideal candidate for out-of-beam high-resolution 3D dosimetry. Successful clinical implementation of CE dosimetry hinges on the development of robust protocols for converting measured CE to radiation dose. Our findings constitute a key step towards clinical CE dosimetry.
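
    The CE threshold used as the cut-off above follows from the Cherenkov condition β > 1/n. A minimal kinematics sketch (assuming a nominal refractive index n = 1.33 for water; this is not the EGSnrc calculation itself):

```python
import math

# Kinetic-energy threshold for Cherenkov emission by electrons:
# emission requires beta > 1/n, where n is the medium's refractive index.
# For water (n ~ 1.33) the threshold is roughly 0.26 MeV.

M_E_C2 = 0.511  # electron rest energy, MeV

def cherenkov_threshold_mev(n):
    beta_thr = 1.0 / n
    gamma_thr = 1.0 / math.sqrt(1.0 - beta_thr**2)
    return M_E_C2 * (gamma_thr - 1.0)  # kinetic energy, MeV

print(round(cherenkov_threshold_mev(1.33), 3))  # ~0.264 MeV
```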

  18. Accuracy Evaluation of Oncentra™ TPS in HDR Brachytherapy of Nasopharynx Cancer Using EGSnrc Monte Carlo Code.

    PubMed

    Hadad, K; Zohrevand, M; Faghihi, R; Sedighi Pashaki, A

    2015-03-01

    HDR brachytherapy is one of the most common methods of nasopharyngeal cancer treatment. In this method, depending on how advanced the tumor is, a dose of 2 to 6 Gy is prescribed as intracavitary brachytherapy. Due to the high dose rate and the tumor location, accuracy evaluation of the treatment planning system (TPS) is particularly important. Common methods used in TPS dosimetry are based on computations in a homogeneous phantom. Heterogeneous phantoms, especially patient-specific voxel phantoms, can increase dosimetric accuracy. In this study, a patient-specific phantom was made using CT images taken from a patient and ctcreate, which is a part of the DOSXYZnrc computational code. The dose distribution was plotted by DOSXYZnrc and compared with that of the TPS. Also, by extracting the absorbed dose of the voxels in the treatment volume, dose-volume histograms (DVHs) were plotted and compared with the Oncentra™ TPS DVHs. The results of the calculations were compared with data from the Oncentra™ treatment planning system, and it was observed that the TPS calculation predicts a lower dose in areas near the source and a higher dose in areas far from the source relative to the MC code. The absorbed dose values in the voxels also showed that the D90 value reported by the TPS is 40% higher than that of the Monte Carlo method. Today, most treatment planning systems use the TG-43 protocol. This protocol may result in errors such as neglecting tissue heterogeneity, scattered radiation, and applicator attenuation. Due to these errors, the AAPM has emphasized departing from the TG-43 protocol and moving toward the new brachytherapy protocol TG-186, in which patient-specific phantoms are used and heterogeneities are accounted for in dosimetry.
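
    The D90 metric compared above can be computed directly from per-voxel absorbed doses: D90 is the minimum dose received by the hottest 90% of the target volume, i.e. the 10th percentile of the voxel-dose distribution. A minimal sketch with synthetic doses (illustrative values, not the study's data):

```python
import numpy as np

# Cumulative DVH and D90 from per-voxel absorbed doses in a target volume.
# The synthetic doses below are illustrative only.

def cumulative_dvh(doses_gy, thresholds_gy):
    """Fraction of the volume receiving at least each threshold dose."""
    doses = np.asarray(doses_gy, dtype=float)
    return np.array([(doses >= d).mean() for d in thresholds_gy])

def d90(doses_gy):
    """Dose covering 90% of the volume = 10th percentile of voxel doses."""
    return np.percentile(doses_gy, 10.0)

rng = np.random.default_rng(0)
doses = rng.normal(loc=5.0, scale=0.5, size=10_000)  # synthetic voxel doses, Gy
thresholds = np.linspace(0.0, 8.0, 81)
dvh = cumulative_dvh(doses, thresholds)
print("D90 =", round(d90(doses), 2), "Gy")
```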

  19. Accuracy Evaluation of Oncentra™ TPS in HDR Brachytherapy of Nasopharynx Cancer Using EGSnrc Monte Carlo Code

    PubMed Central

    Hadad, K.; Zohrevand, M.; Faghihi, R.; Sedighi Pashaki, A.

    2015-01-01

    Background HDR brachytherapy is one of the most common methods of nasopharyngeal cancer treatment. In this method, depending on how advanced the tumor is, a dose of 2 to 6 Gy is prescribed as intracavitary brachytherapy. Due to the high dose rate and the tumor location, accuracy evaluation of the treatment planning system (TPS) is particularly important. Common methods used in TPS dosimetry are based on computations in a homogeneous phantom. Heterogeneous phantoms, especially patient-specific voxel phantoms, can increase dosimetric accuracy. Materials and Methods In this study, a patient-specific phantom was made using CT images taken from a patient and ctcreate, which is a part of the DOSXYZnrc computational code. The dose distribution was plotted by DOSXYZnrc and compared with that of the TPS. Also, by extracting the absorbed dose of the voxels in the treatment volume, dose-volume histograms (DVHs) were plotted and compared with the Oncentra™ TPS DVHs. Results The results of the calculations were compared with data from the Oncentra™ treatment planning system, and it was observed that the TPS calculation predicts a lower dose in areas near the source and a higher dose in areas far from the source relative to the MC code. The absorbed dose values in the voxels also showed that the D90 value reported by the TPS is 40% higher than that of the Monte Carlo method. Conclusion Today, most treatment planning systems use the TG-43 protocol. This protocol may result in errors such as neglecting tissue heterogeneity, scattered radiation, and applicator attenuation. Due to these errors, the AAPM has emphasized departing from the TG-43 protocol and moving toward the new brachytherapy protocol TG-186, in which patient-specific phantoms are used and heterogeneities are accounted for in dosimetry. PMID:25973408

  20. The radiation dosimetry of intrathecally administered radionuclides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stabin, M.G.; Evans, J.F.

    The radiation dose to the spine, spinal cord, marrow, and other organs of the body from intrathecal administration of several radiopharmaceuticals was studied. Anatomic models were developed for the spine, spinal cerebrospinal fluid (CSF), spinal cord, spinal skeleton, cranial skeleton, and cranial CSF. A kinetic model for the transport of CSF was used to determine residence times in the CSF; material leaving the CSF was thereafter assumed to enter the bloodstream and follow the kinetics of the radiopharmaceutical as if intravenously administered. The radiation transport codes MCNP and ALGAMP were used to model the electron and photon transport and energy deposition. The dosimetry of Tc-99m DTPA and HSA, In-111 DTPA, I-131 HSA, and Yb-169 DTPA was studied. Radiation dose profiles for the spinal cord and marrow in the spine were developed and average doses to all other organs were estimated, including dose distributions within the bone and marrow.

  1. Characterization of a fiber-coupled Al2O3:C luminescence dosimetry system for online in vivo dose verification during 192Ir brachytherapy.

    PubMed

    Andersen, Claus E; Nielsen, Søren Kynde; Greilich, Steffen; Helt-Hansen, Jakob; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-03-01

    A prototype of a new dose-verification system has been developed to facilitate prevention and identification of dose delivery errors in remotely afterloaded brachytherapy. The system allows for automatic online in vivo dosimetry directly in the tumor region using small passive detector probes that fit into applicators such as standard needles or catheters. The system measures the absorbed dose rate (0.1 s time resolution) and total absorbed dose on the basis of radioluminescence (RL) and optically stimulated luminescence (OSL) from aluminum oxide crystals attached to optical fiber cables (1 mm outer diameter). The system was tested in the range from 0 to 4 Gy using a solid-water phantom, a Varian GammaMed Plus 192Ir PDR afterloader, and dosimetry probes inserted into stainless-steel brachytherapy needles. The calibrated system was found to be linear in the tested dose range. The reproducibility (one standard deviation) for RL and OSL measurements was 1.3%. The measured depth-dose profiles agreed well with the theoretical expectations computed with the EGSNRC Monte Carlo code, suggesting that the energy dependence for the dosimeter probes (relative to water) is less than 6% for source-to-probe distances in the range of 2-50 mm. Under certain conditions, the RL signal could be greatly disturbed by the so-called stem signal (i.e., unwanted light generated in the fiber cable upon irradiation). The OSL signal is not subject to this source of error. The tested system appears to be adequate for in vivo brachytherapy dosimetry.
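
    A first-order sanity check on depth-dose profiles like those above: at short source-to-probe distances around an Ir-192 source in water, the dose rate falls off approximately as the inverse square of the distance (attenuation and scatter buildup partly cancel). The sketch below is a generic check, not the EGSnrc calculation used in the paper:

```python
# Inverse-square estimate of the relative dose rate around a point source.
# Distances are in mm; the reference distance of 10 mm is an arbitrary
# normalization choice for this illustration.

def relative_dose_point_source(r_mm, r_ref_mm=10.0):
    """Dose rate at r relative to the value at r_ref, inverse-square only."""
    return (r_ref_mm / r_mm) ** 2

for r in (2.0, 5.0, 10.0, 20.0, 50.0):
    print(f"r = {r:5.1f} mm  relative dose = {relative_dose_point_source(r):8.3f}")
```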

  2. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as dose point kernels, S-values and brachytherapy parameters, and has been compared against various MC codes which have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and for simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment.

  3. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B and at the bladder and rectum are compared with the results of superposition. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult woman from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points; the difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A and B and to the bladder and rectum with good accuracy.
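
    The superposition idea above, summing per-dwell-position contributions at a dose point, can be sketched with a simple inverse-square point-source model. Geometry, dwell times, and the arbitrary strength units below are hypothetical placeholders; the study used full MCNP5 source models rather than this simplification:

```python
# Superposition estimate of the dose at a reference point (e.g. point A)
# from several HDR dwell positions: each dwell contributes
# strength * dwell_time / r^2 (arbitrary units, inverse-square only).

def point_dose(point, dwells):
    """dwells: list of (x, y, z, dwell_time_s, strength) tuples, coords in cm."""
    total = 0.0
    for x, y, z, t, s in dwells:
        r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
        total += s * t / r2
    return total

# Three dwell positions spaced 1 cm apart along a tandem (hypothetical):
dwells = [(0.0, 0.0, z, 10.0, 1.0) for z in (-1.0, 0.0, 1.0)]
print(point_dose((2.0, 0.0, 0.0), dwells))  # arbitrary units
```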

  4. Partners in Quality: Tools for Practitioners in Child Care Settings. Standards of Practice, Code of Ethics, Guide to Self-Reflection = Partenaires pour la qualite: Outils pour les intervenantes des divers milieux de garde d'enfants. Normes de pratique, Code de deontologie, Guide d'introspection.

    ERIC Educational Resources Information Center

    Doherty, Gillian

    Partners in Quality is a research and development project sponsored by the Canadian Child Care Federation and its affiliates to explore how child care providers, parents, and other partners can work together to support and improve quality in child care. This booklet, in both English and French, supplements a series to support child care providers…

  5. Symposium on Signal and Image Processing English-Language Abstracts (12th) Held in Juan-Les-Pins, France on 12-16 June 1989

    DTIC Science & Technology

    1989-12-01

    SUMMARY: The paper ... central processor. This makes the perception of the system less accurate and induces a loss in performance. Previous studies have considered the case ... current practice. An inner code, often decoded using a weighted input, is concatenated with an outer code decoded without such a weighting.

  6. Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder

    DOE PAGES

    Morgan, B. E.; Greenough, J. A.

    2015-04-08

    Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to initialization of the turbulence lengthscale L and to the time at which L becomes resolved on the computational mesh. As a result, it is observed that a gradient diffusion closure for turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.
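
    The gradient-diffusion closure criticized above models the turbulent species flux as down-gradient diffusion with an eddy viscosity. The sketch below uses a generic k-L-style form ν_t = C·L·√k; the coefficient C and turbulent Schmidt number are nominal placeholders, not the calibrated values of the paper's k-L model:

```python
import numpy as np

# Gradient-diffusion closure for the turbulent flux of a species mass
# fraction Y:  flux = -(nu_t / Sc_t) * dY/dx,  with nu_t = C * L * sqrt(k).

def eddy_viscosity(k, L, C=0.09):
    """k: turbulent kinetic energy, L: turbulence lengthscale (nominal C)."""
    return C * L * np.sqrt(k)

def turbulent_species_flux(Y, dx, k, L, Sc_t=0.7):
    dYdx = np.gradient(Y, dx)
    return -(eddy_viscosity(k, L) / Sc_t) * dYdx

x = np.linspace(0.0, 1.0, 101)
Y = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.1))   # smooth mixing-layer profile
flux = turbulent_species_flux(Y, x[1] - x[0], k=1.0, L=0.01)
print("peak |flux| =", np.abs(flux).max())
```

The flux is largest where the concentration gradient peaks and always points down-gradient, which is exactly the behavior the paper finds inadequate at early times.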

  7. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissue is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R2 > 0.99), with average errors of 10.1%, 45.2%, and 22.1% relative to the probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for the WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s, versus 8 h for the GPU-based Monte Carlo, for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.
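
    A common fast empirical alternative to full MC photon transport is the steady-state diffusion approximation for an isotropic point source in a homogeneous medium. The sketch below illustrates this class of model only; the paper's own empirical model is not specified here, and the optical properties are nominal tissue-like assumptions:

```python
import math

# Diffusion-approximation fluence around an isotropic point source in an
# infinite homogeneous medium:
#   phi(r) = P * exp(-mu_eff * r) / (4 * pi * D * r)
# with D = 1 / (3 * (mu_a + mu_s')) and mu_eff = sqrt(mu_a / D).
# mu_a (absorption) and mu_s' (reduced scattering) below are nominal values.

def fluence(r_mm, mu_a=0.01, mu_sp=1.0, power=1.0):
    """mu_a, mu_sp in 1/mm; r in mm; returns fluence in power / mm^2."""
    D = 1.0 / (3.0 * (mu_a + mu_sp))
    mu_eff = math.sqrt(mu_a / D)
    return power * math.exp(-mu_eff * r_mm) / (4.0 * math.pi * D * r_mm)

for r in (5.0, 10.0, 20.0):
    print(f"r = {r:4.1f} mm  fluence = {fluence(r):.4e}")
```

Evaluating a closed-form expression like this is why such empirical models run in seconds where voxelized MC transport takes hours.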

  8. Radiofrequency Radiation Dosimetry Handbook. 4th Edition

    DTIC Science & Technology

    1986-10-01

    ...reasonable. Such an equivalence was demonstrated by Nielsen and Nielsen (1965) when they measured identical thermoregulatory responses to exercise ... Circulatory and sweating responses during exercise and heat stress, pp. 251-276. In E. R. Adair (ed.), Microwaves and Thermoregulation. ISBN: 0-12-044020-2...

  9. Toward a New Evaluation of Neutron Standards

    DOE PAGES

    Carlson, Allan D.; Pronyaev, Vladimir G.; Capote, Roberto; ...

    2016-02-03

    Measurements related to neutron cross section standards and certain prompt neutron fission spectra are being evaluated. In addition to the standard cross sections, reference data that are not as well known as the standards are also being investigated. We discuss the procedures and codes used for performing this work. A number of libraries will use the results of this standards evaluation in new versions of their libraries. Most of these data have applications in neutron dosimetry.

  10. Application for internal dosimetry using biokinetic distribution of photons based on nuclear medicine images*

    PubMed Central

    Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade

    2014-01-01

    Objective This article presents a way to obtain estimates of dose in patients submitted to radiotherapy based on the analysis of regions of interest on nuclear medicine images. Materials and Methods A software tool called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the EGSnrc Monte Carlo code. The software was developed with the Microsoft Visual Studio 2010 Service Pack, using the Windows Presentation Foundation project template for the C# programming language. Results With the aforementioned tools, the authors obtained the file for optimization of Monte Carlo simulations using EGSnrc; organization and compaction of dosimetry results with all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. Conclusion The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity. PMID:25741101
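
    The ROI grayscale evaluation mentioned in the Results can be sketched as an intensity-weighted apportioning of activity among source organs. The image and ROI masks below are synthetic placeholders, not the DoRadIo implementation:

```python
import numpy as np

# Weight each source organ by its share of the total summed grayscale
# intensity within its region of interest (ROI) on a nuclear-medicine image.

def roi_weights(image, masks):
    """masks: list of boolean arrays, one per ROI. Returns intensity shares."""
    sums = np.array([image[m].sum() for m in masks], dtype=float)
    return sums / sums.sum()

image = np.zeros((64, 64))
image[10:20, 10:20] = 4.0   # bright uptake region
image[40:50, 40:50] = 1.0   # fainter uptake region
masks = [np.zeros_like(image, dtype=bool) for _ in range(2)]
masks[0][10:20, 10:20] = True
masks[1][40:50, 40:50] = True
print(roi_weights(image, masks))  # -> [0.8 0.2]
```

The resulting weights could then scale per-source S-value contributions or the administered activity assigned to each source organ.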

  11. Validating Fricke dosimetry for the measurement of absorbed dose to water for HDR 192Ir brachytherapy: a comparison between primary standards of the LCR, Brazil, and the NRC, Canada.

    PubMed

    Salata, Camila; David, Mariano Gazineu; de Almeida, Carlos Eduardo; El Gamal, Islam; Cojocaru, Claudiu; Mainegra-Hing, Ernesto; McEwen, Malcom

    2018-04-05

    Two Fricke-based absorbed dose to water standards for HDR Ir-192 dosimetry, developed independently by the LCR in Brazil and the NRC in Canada, have been compared. The agreement in the determination of the dose rate from an HDR Ir-192 source at 1 cm in a water phantom was found to be within the k = 1 combined measurement uncertainties of the two standards: D_NRC/D_LCR = 1.011, standard uncertainty = 2.2%. The dose-based standards also agreed within the uncertainties with the manufacturer's stated dose rate value, which is traceable to a national standard of air kerma. A number of possible influence quantities were investigated, including the specific method for producing the ferrous-sulphate Fricke solution, the geometry of the holder, and the Monte Carlo code used to determine correction factors. The comparison highlighted the lack of data on the determination of G(Fe3+) in this energy range and the possibilities for further development of the holders used to contain the Fricke solution. The comparison also confirmed the suitability of Fricke dosimetry for Ir-192 primary standard dose rate determinations at therapy dose levels.
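
    Fricke dosimetry converts a measured absorbance change into absorbed dose via the radiation chemical yield G(Fe3+). A minimal sketch with commonly quoted nominal values (roughly Co-60 beam quality at 25 °C); the paper's point is precisely that G(Fe3+) is less well characterized at Ir-192 energies, so these numbers are illustrative assumptions only:

```python
# Absorbed dose from a Fricke dosimeter reading:
#   D = dA / (eps * l * rho * G)
# dA:  radiation-induced change in absorbance (dimensionless)
# eps: molar extinction coefficient of Fe3+ at ~304 nm, L/(mol*cm)
# l:   optical path length, cm
# rho: solution density, kg/L
# G:   radiation chemical yield of Fe3+, mol/J
# Unit check: dA/(eps*l) -> mol/L; /rho -> mol/kg; /G -> J/kg = Gy.

def fricke_dose_gy(delta_a, eps=2174.0, l_cm=1.0, rho=1.024, g_mol_per_j=1.61e-6):
    return delta_a / (eps * l_cm * rho * g_mol_per_j)

print(round(fricke_dose_gy(0.05), 1))  # tens of Gy for dA = 0.05
```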

  12. Internal dosimetry with the Monte Carlo code GATE: validation using the ICRP/ICRU female reference computational model

    NASA Astrophysics Data System (ADS)

    Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel

    2017-03-01

    The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.
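
    The S-values compared above follow the MIRD formalism: the dose to a target region per decay in a source region is the emitted energy times the absorbed fraction, divided by the target mass. A sketch for a single monoenergetic emission (toy numbers; real S-values sum over the radionuclide's full emission spectrum):

```python
# MIRD-style S-value for one monoenergetic emission:
#   S(target <- source) = E_per_decay * phi(target <- source) / m_target
# phi is the absorbed fraction; m_target is the target-region mass.

MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_value_gy_per_decay(e_mev, absorbed_fraction, target_mass_kg):
    return e_mev * MEV_TO_J * absorbed_fraction / target_mass_kg

# e.g. 0.1 MeV electrons fully absorbed (phi = 1) in a 20 g region:
print(s_value_gy_per_decay(0.1, 1.0, 0.020))  # Gy per decay
```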

  13. Validating Fricke dosimetry for the measurement of absorbed dose to water for HDR 192Ir brachytherapy: a comparison between primary standards of the LCR, Brazil, and the NRC, Canada

    NASA Astrophysics Data System (ADS)

    Salata, Camila; Gazineu David, Mariano; de Almeida, Carlos Eduardo; El Gamal, Islam; Cojocaru, Claudiu; Mainegra-Hing, Ernesto; McEwen, Malcom

    2018-04-01

    Two Fricke-based absorbed dose to water standards for HDR Ir-192 dosimetry, developed independently by the LCR in Brazil and the NRC in Canada, have been compared. The agreement in the determination of the dose rate from an HDR Ir-192 source at 1 cm in a water phantom was found to be within the k = 1 combined measurement uncertainties of the two standards: D_NRC/D_LCR = 1.011, standard uncertainty = 2.2%. The dose-based standards also agreed within the uncertainties with the manufacturer's stated dose rate value, which is traceable to a national standard of air kerma. A number of possible influence quantities were investigated, including the specific method for producing the ferrous-sulphate Fricke solution, the geometry of the holder, and the Monte Carlo code used to determine correction factors. The comparison highlighted the lack of data on the determination of G(Fe3+) in this energy range and the possibilities for further development of the holders used to contain the Fricke solution. The comparison also confirmed the suitability of Fricke dosimetry for Ir-192 primary standard dose rate determinations at therapy dose levels.

  14. Large Eddy Simulation of Spatially Developing Turbulent Reacting Shear Layers with the One-Dimensional Turbulence Model

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas Frank

    Large eddy simulation (LES) combined with the one-dimensional turbulence (ODT) model is used to simulate spatially developing turbulent reacting shear layers with high heat release and high Reynolds numbers. The LES-ODT results are compared to results from direct numerical simulations (DNS) for model development and validation purposes. The LES-ODT approach is based on LES solutions for momentum and pressure on a coarse grid and solutions for momentum and reactive scalars on a fine, one-dimensional, but three-dimensionally coupled ODT subgrid, which is embedded into the LES computational domain. Although one-dimensional, all three velocity components are transported along the ODT domain. The low-dimensional spatial and temporal resolution of the subgrid scales describes a new modeling paradigm, referred to as autonomous microstructure evolution (AME) models, which resolve the multiscale nature of turbulence down to the Kolmogorov scales. While this new concept aims to mimic the turbulent cascade and to reduce the number of input parameters, AME also enables regime-independent combustion modeling, capable of simulating multiphysics problems simultaneously. The LES as well as the one-dimensional transport equations are solved using an incompressible, low Mach number approximation; however, the effects of heat release are accounted for through a variable density computed from the ideal gas equation of state, based on temperature variations. The computations are carried out on a three-dimensional structured mesh, which is stretched in the transverse direction. While the LES momentum equation is integrated with a third-order Runge-Kutta time integration, the time integration at the ODT level is accomplished with an explicit forward-Euler method. Spatial finite-difference schemes of third (LES) and first (ODT) order are utilized, and a fully consistent fractional-step method at the LES level is used.
    Turbulence closure at the LES level is achieved by utilizing the Smagorinsky model. The chemical reaction is simulated with a global single-step, second-order equilibrium reaction with an Arrhenius reaction rate. The two benchmark cases of constant density reacting and variable density non-reacting shear layers used to determine the ODT parameters yield perfect agreement with regard to first- and second-order flow statistics as well as the shear layer growth rate. The variable density non-reacting shear layer also serves as a test case for the LES-ODT model's ability to simulate passive scalar mixing. The variable density reacting shear layer cases only agree reasonably well and indicate that more work is necessary to improve the variable density coupling of ODT and LES. The disagreement is attributed to the fact that the ODT filtered density is kept constant across the Runge-Kutta steps. Furthermore, the large scale and subgrid turbulent kinetic energy (TKE) spectra at several downstream locations, as well as the TKE budgets, need to be studied in more depth to obtain a better understanding of the model and of the flow under investigation. The local Reynolds number based on the one-percent thickness at the exit is Re_delta ≈ 5300 for the constant density reacting and the variable density non-reacting cases. For the variable density reacting shear layer, the Reynolds number based on the one-percent thickness is Re_delta ≈ 2370. The variable density reacting shear layers show suppressed growth rates due to density variations caused by heat release, which has also been reported in the literature. A Lewis number parameter study is performed to extract non-unity Lewis number effects. An increase in the Lewis number leads to a further suppression of the growth rate, but to an increased spread of the second-order flow statistics.
    A major focus and challenge of this work is to improve and advance the three-dimensional coupling of the one-dimensional ODT domains while keeping the solution correct. This entails major restructuring of the model. The turbulent reacting shear layer poses a physical challenge to the model because it is a statistically stationary, non-decaying, inhomogeneous and anisotropic turbulent flow. This challenge also requires additions to the eddy sampling procedure. Besides the physical advancements, the LES-ODT code is also improved regarding its ability to use general cuboid geometries, an array structure that allows boundary conditions to be applied via ghost cells, and non-uniform structured meshes. The use of transverse grid stretching requires the implementation of the ODT triplet map on a stretched grid. Further improvements to the subroutine structure, with global variables that enable serial code speed-up and parallelization with OpenMP, are undertaken. Porting the code to a higher-level, object-oriented, finite-volume based CFD platform, such as OpenFOAM, that allows more advanced array and parallelization features with graphics processing units (GPUs) as well as parallelization with the message passing interface (MPI) to simulate complex geometries, is recommended for future work.
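
    The Smagorinsky closure named above computes the subgrid eddy viscosity from the resolved strain rate. A minimal sketch (the value of C_s is a nominal textbook choice, not the thesis's calibration):

```python
import numpy as np

# Smagorinsky subgrid closure:
#   nu_t = (C_s * Delta)^2 * |S|,  |S| = sqrt(2 * S_ij S_ij),
# where S_ij is the resolved strain-rate tensor and Delta the filter width.
# grad_u is the 3x3 velocity-gradient tensor du_i/dx_j at a grid point.

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # |S|
    return (c_s * delta) ** 2 * S_mag

# Simple shear u = (gamma * y, 0, 0): |S| equals the shear rate gamma.
gamma = 10.0
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, delta=0.01))  # = (0.17*0.01)^2 * 10
```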

  15. HPC Institutional Computing Project: W15_lesreactiveflow KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Waters, Jiajia

KIVA-hpFE is high-performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing an MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and many times faster than our previous generation of parallel engine modeling software. The 5th generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, it does not require special hybrid or blending treatment near walls. The FEM projection method also uses Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly with enrichment in areas of relatively larger error as determined by error estimation methods. When not using the hp-adaptive module, the code employs Lagrangian basis or shape functions, constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts and cylinders.
It can also easily be extended to stress modeling of solids, as used in fluid-structure interaction problems, solidification, porous media modeling and magnetohydrodynamics.
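The Galerkin FEM core of such a solver reduces, in the simplest 1-D linear-element case, to assembling per-element matrices into a global system; an illustrative Python sketch (not KIVA-hpFE code) for the stiffness matrix of -u'' = f:

```python
import numpy as np

def assemble_stiffness_1d(nodes):
    """Galerkin FEM stiffness matrix for -u'' = f on a 1-D mesh of linear
    (hat) basis functions; each element contributes (1/h) * [[1, -1], [-1, 1]]
    to the rows and columns of its two nodes."""
    n = len(nodes)
    K = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]                       # element length
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += ke                         # scatter-add
    return K
```

Higher-order, hierarchical, or 3-D bases change only the element matrix and connectivity; the assemble-and-scatter pattern is the same.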

  16. Lung Dosimetry for Radioiodine Treatment Planning in the Case of Diffuse Lung Metastases

    PubMed Central

    Song, Hong; He, Bin; Prideaux, Andrew; Du, Yong; Frey, Eric; Kasecamp, Wayne; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George

    2010-01-01

The lungs are the most frequent sites of distant metastasis in differentiated thyroid carcinoma. Radioiodine treatment planning for these patients is usually performed following the Benua–Leeper method, which constrains the administered activity to a 2.96 GBq (80 mCi) whole-body retention at 48 h after administration to prevent lung toxicity in the presence of iodine-avid lung metastases. This limit was derived from clinical experience, and a dosimetric analysis of lung and tumor absorbed dose would be useful to understand its implications for toxicity and tumor control. Because of highly nonuniform lung density and composition, as well as the nonuniform activity distribution when the lungs contain tumor nodules, Monte Carlo dosimetry is required to estimate tumor and normal lung absorbed dose. Reassessment of this toxicity limit is also appropriate in light of the contemporary use of recombinant thyrotropin (thyroid-stimulating hormone, rTSH) to prepare patients for radioiodine therapy. In this work we demonstrated the use of MCNP, a Monte Carlo electron and photon transport code, in a 3-dimensional (3D) imaging-based absorbed dose calculation for tumor and normal lungs. Methods: A pediatric thyroid cancer patient with diffuse lung metastases was administered 37 MBq of 131I after preparation with rTSH. SPECT/CT scans were performed over the chest at 27, 74, and 147 h after tracer administration. The time–activity curve for 131I in the lungs was derived from whole-body planar imaging and compared with that obtained from quantitative SPECT methods. Reconstructed and coregistered SPECT/CT images were converted into 3D density and activity probability maps suitable for MCNP4b input. Absorbed dose maps were calculated using electron and photon transport in MCNP4b. Administered activity was estimated on the basis of the maximum tolerated dose (MTD) of 27.25 Gy to the normal lungs.
Computational efficiency of the MCNP4b code was studied with a simple segmentation approach. In addition, the Benua–Leeper method was used to estimate the recommended administered activity. The standard dosing plan was modified to account for the weight of this pediatric patient: the 2.96-GBq (80 mCi) whole-body retention was scaled to 2.44 GBq (66 mCi) to give the same dose rate of 43.6 rad/h in the lungs at 48 h. Results: Using the MCNP4b code, both the spatial dose distribution and a dose–volume histogram were obtained for the lungs. An administered activity of 1.72 GBq (46.4 mCi) delivered the putative MTD of 27.25 Gy to the lungs with a tumor absorbed dose of 63.7 Gy. Directly applying the Benua–Leeper method, an administered activity of 3.89 GBq (105.0 mCi) was obtained, resulting in tumor and lung absorbed doses of 144.2 and 61.6 Gy, respectively, when MCNP-based dosimetry was applied. The voxel-by-voxel calculation time of 4,642.3 h for photon transport was reduced to 16.8 h when the activity maps were segmented into 20 regions. Conclusion: MCNP4b-based, patient-specific 3D dosimetry is feasible and important in the dosimetry of thyroid cancer patients with avid lung metastases that exhibit prolonged retention in the lungs. PMID:17138741
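The time–activity curve step described above is often reduced to a mono-exponential fit and analytic integration; a hedged Python sketch of that step (single-exponential clearance is an assumption for illustration, not necessarily the fitting procedure used in the study):

```python
import numpy as np

def cumulated_activity(times_h, activities):
    """Fit a mono-exponential A(t) = A0 * exp(-lam * t) to serial activity
    measurements (e.g. lung values from a few imaging time points) and
    return the time-integrated activity A0/lam, in the units of
    `activities` times hours."""
    slope, intercept = np.polyfit(times_h, np.log(activities), 1)
    lam = -slope                  # effective clearance constant, 1/h
    a0 = np.exp(intercept)        # back-extrapolated activity at t = 0
    return a0 / lam
```

The time-integrated (cumulated) activity is what, multiplied by a dose-per-decay factor such as an S value or a Monte Carlo dose kernel, yields the absorbed dose.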

  17. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulation (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Although the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable in the interior of the canopy, to obtain an analytical model capable of describing the mean flow over the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.
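The one-dimensional Massman-type profile used for the canopy interior can be sketched as an exponential attenuation of the canopy-top wind speed; an illustrative Python fragment with a hypothetical attenuation coefficient, not the parameterization fitted in this work:

```python
import numpy as np

def canopy_wind_profile(z, h, u_h, attenuation):
    """Massman-type one-dimensional mean wind profile inside a uniform
    canopy: u(z) = u_h * exp(n * (z/h - 1)) for 0 <= z <= h, where u_h is
    the wind speed at canopy top and n (`attenuation`) grows with canopy
    drag. Illustrative form only."""
    return u_h * np.exp(attenuation * (np.asarray(z) / h - 1.0))
```

At the canopy top (z = h) the profile matches u_h by construction, and it decays toward the ground at a rate set by the drag of the foliage.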

  19. A hyperboloid representation of the bone-marrow interface within 3D NMR images of trabecular bone: applications to skeletal dosimetry

    NASA Astrophysics Data System (ADS)

    Rajon, D. A.; Shah, A. P.; Watchman, C. J.; Brindle, J. M.; Bolch, W. E.

    2003-06-01

    Recent advances in physical models of skeletal dosimetry utilize high-resolution NMR microscopy images of trabecular bone. These images are coupled to radiation transport codes to assess energy deposition within active bone marrow irradiated by bone- or marrow-incorporated radionuclides. Recent studies have demonstrated that the rectangular shape of image voxels is responsible for cross-region (bone-to-marrow) absorbed fraction errors of up to 50% for very low-energy electrons (<50 keV). In this study, a new hyperboloid adaptation of the marching cube (MC) image-visualization algorithm is implemented within 3D digital images of trabecular bone to better define the bone-marrow interface, and thus reduce voxel effects in the assessment of cross-region absorbed fractions. To test the method, a mathematical sample of trabecular bone was constructed, composed of a random distribution of spherical marrow cavities, and subsequently coupled to the EGSnrc radiation code to generate reference values for the energy deposition in marrow or bone. Next, digital images of the bone model were constructed over a range of simulated image resolutions, and coupled to EGSnrc using the hyperboloid MC (HMC) algorithm. For the radionuclides 33P, 117mSn, 131I and 153Sm, values of S(marrow←bone) estimated using voxel models of trabecular bone were shown to have relative errors of 10%, 9%, <1% and <1% at a voxel size of 150 µm. At a voxel size of 60 µm, these errors were 6%, 5%, <1% and <1%, respectively. When the HMC model was applied during particle transport, the relative errors on S(marrow←bone) for these same radionuclides were reduced to 7%, 6%, <1% and <1% at a voxel size of 150 µm, and to 2%, 2%, <1% and <1% at a voxel size of 60 µm. The technique was also applied to a real NMR image of human trabecular bone with a similar demonstration of reductions in dosimetry errors.
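The interface-location step underlying marching-cubes-type algorithms is a per-edge interpolation of the iso-surface crossing; a minimal Python sketch (plain linear interpolation is shown; the HMC method then fits a hyperboloid patch through such edge crossings rather than flat triangles):

```python
def iso_crossing(v0, v1, iso):
    """Fractional position (0..1) of the iso-surface crossing along a voxel
    edge whose endpoint values are v0 and v1 (v0 != v1). This is the basic
    interpolation step of marching-cubes-style surface extraction from a
    3-D image such as an NMR bone/marrow volume."""
    return (iso - v0) / (v1 - v0)
```

Replacing the stair-stepped voxel boundary with a surface passing through these crossings is what reduces the low-energy electron absorbed-fraction errors described above.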

  20. Monte Carlo simulations in Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Loudos, George K.

    2007-11-01

Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are an important tool for the optimal design of detector systems, and have also demonstrated potential to improve image quality and acquisition protocols. Many general purpose (MCNP, Geant4, etc.) or dedicated codes (SimSET, etc.) have been developed, aiming to provide accurate and fast results. Special emphasis is given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges, which include the simulation of clinical studies and dosimetry applications.
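The common kernel shared by all such Monte Carlo packages is sampling photon interaction distances from the exponential attenuation law; a minimal Python sketch of that inversion step:

```python
import math
import random

def sample_path_length(mu, rng=random):
    """Sample a photon free path from the exponential attenuation law
    p(d) = mu * exp(-mu * d) by inversion: d = -ln(xi) / mu with xi
    uniform in (0, 1]. This sampling step is the basic building block
    of transport codes such as MCNP, Geant4, SimSET and GATE."""
    xi = rng.random()                  # uniform in [0, 1)
    return -math.log(1.0 - xi) / mu   # 1 - xi lies in (0, 1], avoiding log(0)
```

Averaged over many samples, the mean free path converges to 1/mu, the expected value of the exponential distribution.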

  1. The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems

    DOE R&D Accomplishments Database

    Nelson, W. R.; Namito, Yoshihito

    1990-03-01

    In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.

  2. Corps et culture: les codes de savoir-vivre (Body and Culture: The Standards of Etiquette).

    ERIC Educational Resources Information Center

    Picard, Dominique

    1983-01-01

    The evolution of values and standards of behavior as they relate to the body in culture are examined, especially in light of recent trends toward recognition of the natural and the spontaneous, the positive value placed on sexuality, and at the same time, narcissism and emphasis on youth. (MSE)

  3. Second Order Non-Linear Optical Polyphosphazenes. Proceedings of Symposium on Non-Linear Optics, ACS Meeting, Held in Boston, Massachusetts 1990

    DTIC Science & Technology

    1990-03-23


  4. Comparison of normal tissue dose calculation methods for epidemiological studies of radiotherapy patients.

    PubMed

    Mille, Matthew M; Jung, Jae Won; Lee, Choonik; Kuzmin, Gleb A; Lee, Choonsik

    2018-06-01

Radiation dosimetry is an essential input for epidemiological studies of radiotherapy patients aimed at quantifying the dose-response relationship of late-term morbidity and mortality. Individualised organ dose must be estimated for all tissues of interest located in-field, near-field, or out-of-field. Whereas conventional measurement approaches are limited to points in water or anthropomorphic phantoms, computational approaches using patient images or human phantoms offer greater flexibility and can provide more detailed three-dimensional dose information. In the current study, we systematically compared four different dose calculation algorithms so that dosimetrists and epidemiologists can better understand the advantages and limitations of the various approaches at their disposal. The four dose calculation algorithms considered were: (1) the Analytical Anisotropic Algorithm (AAA) and (2) the Acuros XB algorithm, both as implemented in the Eclipse treatment planning system (TPS); (3) a Monte Carlo radiation transport code, EGSnrc; and (4) an accelerated Monte Carlo code, the x-ray Voxel Monte Carlo (XVMC). The four algorithms were compared in terms of their accuracy and appropriateness in the context of dose reconstruction for epidemiological investigations. Accuracy in peripheral dose was evaluated first by benchmarking the calculated dose profiles against measurements in a homogeneous water phantom. Additional simulations in a heterogeneous cylinder phantom evaluated the performance of the algorithms in the presence of tissue heterogeneity. In general, we found that the algorithms contained within the commercial TPS (AAA and Acuros XB) were fast and accurate in-field or near-field, but not acceptable out-of-field. Therefore, the TPS is best suited for epidemiological studies involving large cohorts where the organs of interest are located in-field or partially in-field.
The EGSnrc and XVMC codes showed excellent agreement with measurements both in-field and out-of-field. The EGSnrc code was the most accurate dosimetry approach, but was too slow to be used for large-scale epidemiological cohorts. The XVMC code showed similar accuracy to EGSnrc, but was significantly faster, and thus epidemiological applications seem feasible, especially when the organs of interest reside far away from the field edge.
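Benchmarking calculated profiles against measurement, as done here, reduces to a point-by-point percent deviation; a trivial Python helper for illustration (the tolerance criteria in the study may differ):

```python
import numpy as np

def percent_deviation(calculated, measured):
    """Point-by-point percent deviation of a calculated dose profile from
    measurement: 100 * (calc - meas) / meas. Positive values mean the
    algorithm overestimates the measured dose at that point."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (calculated - measured) / measured
```

Plotting this quantity along a profile makes the in-field versus out-of-field behavior of each algorithm immediately visible.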

  5. A Sub-filter Scale Noise Equation for Hybrid LES Simulations

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

Hybrid LES/subscale modeling approaches have an important advantage over current noise prediction methods in that they involve modeling only of the relatively universal subscale motion and not of the configuration-dependent larger scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the subscale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between current noise prediction codes and previously proposed hybrid techniques.

  6. Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators

    NASA Astrophysics Data System (ADS)

    Sloboda, Ron S.; Wang, Ruqing

    1998-12-01

    Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.

  7. Air kerma calibration factors and chamber correction values for PTW soft x-ray, NACP and Roos ionization chambers at very low x-ray energies.

    PubMed

    Ipe, N E; Rosser, K E; Moretti, C J; Manning, J W; Palmer, M J

    2001-08-01

This paper evaluates the characteristics of ionization chambers for the measurement of absorbed dose to water using very low-energy x-rays. The values of the chamber correction factor, k(ch), used in the IPEMB 1996 code of practice are derived for the UK secondary standard ionization chambers (PTW type M23342 and PTW type M23344), the Roos (PTW type 34001) and NACP electron chambers. The responses in air of the small and large soft x-ray chambers (PTW types M23342 and M23344) and of the NACP and Roos electron ionization chambers were compared. Besides the soft x-ray chambers, the NACP and Roos chambers can be used for very low-energy x-ray dosimetry provided that they are used in the restricted energy range over which their response does not change by more than 5%. The chamber correction factor was found by comparing the absorbed dose to water determined using the dosimetry protocol recommended for low-energy x-rays with that for very low-energy x-rays; the overlap energy range was extended using data from Grosswendt and Knight. The chamber correction factors given in this paper are chamber dependent, varying from 1.037 to 1.066 for a PTW type M23344 chamber, which is very different from the value of unity given in the IPEMB code. However, the values of k(ch) determined in this paper agree with those given in the DIN standard within experimental uncertainty. The authors recommend that the very low-energy section of the IPEMB code be amended to include the most up-to-date values of k(ch).

  8. Development of a patient-specific dosimetry estimation system in nuclear medicine examination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, H. H.; Dong, S. L.; Yang, H. J.

    2011-07-01

The purpose of this study is to develop a patient-specific dosimetry estimation system for nuclear medicine examinations using a SimSET-based Monte Carlo code. We added a dose deposition routine to store the energy deposited by photons during their flights in SimSET and developed a user-friendly interface for reading PET and CT images. Dose calculated on the ORNL phantom was used to validate the accuracy of this system. The S values for 99mTc, 18F and 131I obtained by the system were compared to those from the MCNP4C code and OLINDA. The ratios of S values computed by this system to those obtained with OLINDA for various organs ranged from 0.93 to 1.18, comparable to those obtained with the MCNP4C code (0.94 to 1.20). The average S-value ratios were 0.99±0.04, 1.03±0.05, and 1.00±0.07 for the isotopes 131I, 18F, and 99mTc, respectively. The simulation time of SimSET was two times shorter than that of MCNP4C for the various isotopes. A 3D dose calculation was also performed on a patient data set with a PET/CT examination using this system. Results from the patient data showed that the estimated S values using this system differed only slightly from those of OLINDA for the ORNL phantom. In conclusion, this system can generate patient-specific dose distributions and display isodose curves on top of the anatomic structure through a friendly graphic user interface. It may also provide a useful tool for establishing appropriate dose-reduction strategies for patients in nuclear medicine environments. (authors)
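S values are used in the MIRD schema as D(target) = Σ over source organs of Ã(source) · S(target ← source); an illustrative Python sketch, with hypothetical organ names and numbers rather than values from the study:

```python
def mird_dose(cumulated_activities, s_values, target):
    """MIRD-style absorbed dose to `target`: the sum over source organs of
    time-integrated activity A_tilde(source) times S(target <- source).
    `cumulated_activities` maps source organ -> A_tilde (e.g. MBq*h);
    `s_values` maps (target, source) pairs -> dose per unit A_tilde."""
    return sum(a * s_values[(target, source)]
               for source, a in cumulated_activities.items())

# Hypothetical example: dose to lungs from liver and lung self-dose.
a_tilde = {"liver": 100.0, "lungs": 50.0}            # MBq*h, illustrative
s = {("lungs", "liver"): 0.01, ("lungs", "lungs"): 0.1}  # mGy per MBq*h
```

With these made-up numbers the lung dose is 100·0.01 + 50·0.1 = 6.0 mGy; the study's contribution is replacing the fixed-phantom S values with patient-specific Monte Carlo ones.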

  9. DNS and LES of a Shear-Free Mixing Layer

    NASA Technical Reports Server (NTRS)

    Knaepen, B.; Debliquy, O.; Carati, D.

    2003-01-01

The purpose of this work is twofold. First, given the computational resources available today, it is possible to reach, using DNS, higher Reynolds numbers than in Briggs et al. In the present study, the microscale Reynolds numbers reached in the low- and high-energy homogeneous regions are, respectively, 32 and 69. The results reported earlier can thus be complemented and their robustness in the presence of increased turbulence studied. The second aim of this work is to perform a detailed and documented LES of the shear-free mixing layer. In that respect, the creation of a DNS database at higher Reynolds number is necessary in order to make meaningful LES assessments. From the point of view of LES, the shear-free mixing layer is interesting since it allows one to test how traditional LES models perform in the presence of an inhomogeneity without having to deal with difficult numerical issues. Indeed, as argued in Briggs et al., it is possible to use a spectral code to study the shear-free mixing layer, and one can thus focus on the accuracy of the modelling while avoiding contamination of the results by commutation errors and the like. This paper is organized as follows. First we detail the initialization procedure used in the simulation. Since the flow is not statistically stationary, this initialization procedure has a fairly strong influence on the evolution. Although we focus here on the shear-free mixing layer, the method proposed in the present work can easily be used for other flows with one inhomogeneous direction. The next section of the article is devoted to the description of the DNS. All the relevant parameters are listed and a comparison with the Veeravalli & Warhaft experiment is performed. The section on the LES of the shear-free mixing layer follows, with a detailed comparison between the filtered DNS data and the LES predictions.
It is shown that simple eddy viscosity models perform very well for the present test case, most probably because the flow appears to be almost isotropic in the small-scale range that is not resolved by the LES.

  10. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

Purpose: The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B and at the bladder and rectum points were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A and B and to the bladder and rectum with good accuracy. PMID:29619061
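The superposition method described above is a dwell-time-weighted sum of single-source dose rates; a minimal Python sketch with a hypothetical dose-rate function standing in for the MC-derived (or TG-43-style) single-source data:

```python
def superposition_dose(dwells, point, dose_rate):
    """Dose to `point` by superposition: the sum over dwell positions of
    dwell time multiplied by the single-source dose rate for that
    source-to-point geometry. `dose_rate(pos, point)` is a placeholder for
    tabulated or Monte Carlo single-source data."""
    return sum(t * dose_rate(pos, point) for pos, t in dwells)

# Toy 1-D example: inverse-square "dose rate" (illustrative only).
inv_square = lambda pos, pt: 1.0 / (pos - pt) ** 2
dwells = [(0.0, 10.0), (2.0, 5.0)]   # (position, dwell time) pairs
```

With the toy data above, the dose at point 1.0 is 10·1 + 5·1 = 15; the study's finding is that ignoring applicator attenuation in this sum changes the clinical point doses by less than 5%.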

  11. Comparison of parameters affecting GNP-loaded choroidal melanoma dosimetry; Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Sharabiani, Marjan; Asadi, Somayeh; Barghi, Amir Rahnamai; Vaezzadeh, Mehdi

    2018-04-01

The current study reports the results of tumor dosimetry in the presence of gold nanoparticles (GNPs) of different sizes and concentrations. Because of the limited number of studies on brachytherapy of choroidal melanoma in combination with GNPs, this study was performed to determine the optimum GNP size and concentration contributing the highest dose deposition in the tumor region, using two phantom test cases, namely a water phantom and a full Monte Carlo model of the human eye. Both phantoms were simulated with the MCNP5 code. Tumor dosimetry was performed for a typical point photon source with an energy of 0.38 MeV as a high-energy source in the water phantom and a 103Pd brachytherapy source with an average energy of 0.021 MeV as a low-energy source in the eye phantom. The dosimetry was repeated for different sizes and concentrations of GNPs. For all diameters, an increase in GNP concentration resulted in an increase in the dose deposited in the region of interest. At a given concentration, GNPs with larger diameters contributed more dose to the tumor region, an effect more pronounced in the eye phantom. A diameter of 100 nm was found to be the optimum size for achieving the highest energy deposition within the target. This work investigated the optimum parameters affecting macroscopic dose enhancement in GNP-aided brachytherapy of choroidal melanoma, and also has implications for using low-energy photon sources in the presence of GNPs to acquire the highest dose enhancement. The study considered four different sizes and concentrations of GNPs; considering the sensitivity of human eye tissue, a comprehensive study over a wide range of sizes and concentrations is required to report precise optimum parameters.

  12. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of the calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND, and assesses the performance of the parallel method implemented in MCBEND.
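The pattern behind shared-memory parallel Monte Carlo, independent history batches each with its own random-number state, can be illustrated in Python (a sketch of the general pattern, not MCBEND's actual OpenMP implementation):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_batch(seed, n_histories):
    """One independent batch of toy 'histories' (here: exponential path
    lengths); each batch owns its own RNG, mirroring the per-thread state
    an OpenMP port must carve out of formerly global data."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(n_histories))

def parallel_tally(n_workers=4, histories_per_batch=1000):
    """Run batches concurrently over shared memory and reduce the partial
    tallies. Because the batches are independent and seeded, the result is
    identical to running them serially."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(run_batch, range(n_workers),
                            [histories_per_batch] * n_workers)
        return sum(partials)
```

The key property, as in the paper's setting, is that the model data can be shared read-only between workers while only the small per-history state is replicated.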

  13. Boundary Electron and Beta Dosimetry-Quantification of the Effects of Dissimilar Media on Absorbed Dose

    NASA Astrophysics Data System (ADS)

    Nunes, Josane C.

    1991-02-01

    This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft -tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta -ray applicators made with strontium-90 are employed in certain ophthalmic treatments and iodine-131 is used to test thyroid function. More recent medical procedures under development and which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever there is handling of radioactive material. Determination of absorbed doses in the examples of the preceding paragraph requires considering boundaries of interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus -32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated. 
Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained with a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data that approximate experimental results to within 30%. This is encouraging, but it remains to be determined whether this method performs equally well for other source energies.

  14. Radioactive decay data tables: A handbook of decay data for application to radiation dosimetry and radiological assessments

    NASA Astrophysics Data System (ADS)

    Kocher, D. C.; Smith, J. S.

    Decay data are presented for approximately 500 radionuclides, including those occurring naturally in the environment, those of potential importance in routine or accidental releases from the nuclear fuel cycle, those of current interest in nuclear medicine and fusion reactor technology, and some of those of interest to Committee 2 of the International Commission on Radiological Protection for the estimation of annual limits on intake via inhalation and ingestion for occupationally exposed individuals. The report describes the physical processes involved in radioactive decay that produce the different types of radiation observed, the methods used to prepare the decay data sets for each radionuclide in the format of the computerized Evaluated Nuclear Structure Data File, the tables of radioactive decay data, and the computer code MEDLIST used to produce the tables. Applications of the data to problems of interest in radiation dosimetry and radiological assessments are considered, as well as the calculation of the activity of a daughter radionuclide relative to the activity of its parent in a radioactive decay chain.
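    The parent-daughter activity relation mentioned above follows from the two-member Bateman solution; a minimal sketch (the function name and sample decay constants are ours, not from the handbook):

```python
import math

def daughter_to_parent_activity_ratio(lam_p, lam_d, t):
    """Ratio of daughter to parent activity A_d(t)/A_p(t) for a two-member
    decay chain with no daughter present at t = 0 (Bateman solution):
        A_d/A_p = lam_d / (lam_d - lam_p) * (1 - exp(-(lam_d - lam_p) * t))
    lam_p, lam_d: decay constants (1/s); t: elapsed time (s)."""
    if lam_d == lam_p:
        # Degenerate equal-constant case: the ratio grows linearly as lam * t
        return lam_p * t
    return lam_d / (lam_d - lam_p) * (1.0 - math.exp(-(lam_d - lam_p) * t))

# With lam_d >> lam_p the ratio approaches transient equilibrium,
# lam_d / (lam_d - lam_p):
ratio = daughter_to_parent_activity_ratio(1e-3, 1e-1, 1e3)
```

    For a short-lived daughter of a long-lived parent, the ratio saturates near 1 (secular equilibrium), which is the limiting case of the expression above.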

  15. Comparison between the TRS-398 code of practice and the TG-51 dosimetry protocol for flattening filter free beams

    NASA Astrophysics Data System (ADS)

    Lye, J. E.; Butler, D. J.; Oliver, C. P.; Alves, A.; Lehmann, J.; Gibbons, F. P.; Williams, I. M.

    2016-07-01

    Dosimetry protocols for external beam radiotherapy currently in use, such as IAEA TRS-398 and AAPM TG-51, were written for conventional linear accelerators. In these accelerators, a flattening filter is used to produce a beam which is uniform at the water depths where the ionization chamber is used to measure the absorbed dose. Recently, clinical linacs have been implemented without the flattening filter, and a published theoretical analysis suggested that with these beams a dosimetric error of order 0.6% could be expected for IAEA TRS-398, because the TPR20,10 beam quality index does not accurately predict the water-to-air stopping power ratio for the softer flattening-filter-free (FFF) beam spectra. We measured doses on eleven FFF linacs at 6 MV and 10 MV using both dosimetry protocols and found average differences of 0.2% or less. The expected shift due to stopping powers was not observed. We present Monte Carlo kQ calculations which show a much smaller difference between FFF and flattened beams than originally predicted. These results are explained by the inclusion of the added backscatter plates and build-up filters used in modern clinical FFF linacs, in contrast to a Monte Carlo model of an FFF linac in which the flattening filter is simply removed and no additional build-up or backscatter plate is added.
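    Both protocols compute dose from the same chain of factors, so protocol differences enter through kQ; a minimal sketch of the shared formalism (the numerical values below are purely illustrative, not from this study):

```python
def dose_to_water(M_corrected, N_Dw, k_Q):
    """Absorbed dose to water at the reference depth, as in TRS-398 and TG-51:
        D_w = M * N_D,w * k_Q
    M_corrected: chamber reading corrected for influence quantities (C),
    N_Dw: absorbed-dose-to-water calibration coefficient (Gy/C),
    k_Q: beam quality conversion factor (dimensionless)."""
    return M_corrected * N_Dw * k_Q

# Illustrative: a 0.2% difference in k_Q between protocols maps directly
# to a 0.2% difference in reported dose.
d_trs = dose_to_water(20e-9, 5.4e7, 0.990)
d_tg = dose_to_water(20e-9, 5.4e7, 0.992)
```

    The two protocols differ mainly in the beam quality index used to select k_Q (TPR20,10 vs. %dd(10)x), which is why a spectral mismatch would surface as a kQ discrepancy.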

  16. PAB3D: Its History in the Use of Turbulence Models in the Simulation of Jet and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Hunter, Craig A.; Deere, Karen A.; Massey, Steven J.; Elmiligui, Alaa

    2006-01-01

    This paper reviews the history of PAB3D in the implementation of turbulence models for simulating jet and nozzle flows. We describe different turbulence models used in the simulation of subsonic and supersonic jet and nozzle flows. The time-averaged simulations use modified linear or nonlinear two-equation models to account for supersonic flow as well as high-temperature mixing. Two multiscale-type turbulence models are used for unsteady flow simulations. These models require modifications to the Reynolds-Averaged Navier-Stokes (RANS) equations. The first scheme is a hybrid RANS/LES model utilizing the two-equation (k-epsilon) model with a RANS/LES transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes (PANS) formulation. All of these models are implemented in the three-dimensional Navier-Stokes code PAB3D. This paper discusses computational methods, code implementation, computed results for a wide range of nozzle configurations at various operating conditions, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions.
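    A transition function of the kind described, dependent on grid spacing and the computed turbulence length scale, could be sketched as follows; this is a hypothetical illustration of the general idea, not PAB3D's actual implementation (the function name, the tanh form, and the constant c are our assumptions):

```python
import math

def rans_les_blend(k, eps, delta, c=1.0):
    """Hypothetical RANS/LES transition function: compares the k-epsilon
    turbulence length scale l = k**1.5 / eps with the local grid spacing
    delta. Returns a weight near 1 (RANS) where the grid is too coarse to
    resolve l, and near 0 (LES) where the grid resolves it."""
    l_turb = k ** 1.5 / eps
    return math.tanh(c * delta / l_turb)
```

    The blending weight would then interpolate between the RANS eddy viscosity and the subgrid-scale viscosity in the momentum equations.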

  17. Numerics and subgrid-scale modeling in large eddy simulations of stratocumulus clouds.

    PubMed

    Pressel, Kyle G; Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M; Tan, Zhihong

    2017-06-01

    Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS-II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS-II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid-scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS-II observations are identified. The results show that using weighted essentially non-oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest-fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high-quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest-fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model.
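    The WENO machinery referred to above can be illustrated in one dimension; a minimal sketch of classic fifth-order WENO (WENO5-JS) reconstruction of an interface value from cell averages (an illustration of the scheme family, not the paper's production code):

```python
def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Classic WENO5-JS reconstruction of the value at the right face of
    cell i from the five cell averages v[i-2..i+2]. Smoothness indicators
    suppress the weight of any candidate stencil that crosses a
    discontinuity; on smooth data the scheme is fifth-order accurate."""
    # Candidate third-order reconstructions on the three sub-stencils
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13.0/12.0 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13.0/12.0 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13.0/12.0 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```

    On smooth data the nonlinear weights revert to the ideal ones; near a sharp gradient, such as a stratocumulus inversion, the indicators b_k shift weight away from the stencil containing the jump, which is the source of the scheme's inherent, selective dissipation.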

  18. Numerics and subgrid‐scale modeling in large eddy simulations of stratocumulus clouds

    PubMed Central

    Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M.; Tan, Zhihong

    2017-01-01

    Abstract Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS‐II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS‐II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid‐scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS‐II observations are identified. The results show that using weighted essentially non‐oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest‐fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high‐quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest‐fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model. PMID:28943997

  19. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    NASA Astrophysics Data System (ADS)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. 
The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator workbook for use in correcting the measured activities. Output from the SigPhi Calculator is automatically produced, and consists of a portion of the STAYSL PNNL input file data that is required to run the spectral adjustment calculations. Within STAYSL PNNL, the least-squares process is performed in one step, without iteration, and provides rapid results on PC platforms. STAYSL PNNL creates multiple output files with tabulated results, data suitable for plotting, and data formatted for use in subsequent radiation damage calculations using the SPECTER computer code (which is not included in the STAYSL PNNL suite). All components of the software suite have undergone extensive testing and validation prior to release and test cases are provided with the package.
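    The least-squares step at the heart of such an adjustment can be sketched as a one-shot generalized least-squares update; a minimal illustration of the idea (not STAYSL PNNL's actual code; the variable names are ours):

```python
import numpy as np

def gls_spectral_adjust(phi0, C_phi, A, a_meas, C_a):
    """One-step generalized least-squares adjustment of a prior group
    spectrum phi0 (covariance C_phi) against measured saturated activation
    rates a_meas (covariance C_a), given a group cross-section matrix A
    (reactions x groups) so that the model predicts a = A @ phi."""
    r = a_meas - A @ phi0                 # residual of the prior spectrum
    S = A @ C_phi @ A.T + C_a             # covariance of that residual
    K = C_phi @ A.T @ np.linalg.inv(S)    # gain matrix
    phi = phi0 + K @ r                    # adjusted spectrum
    C_post = C_phi - K @ A @ C_phi        # adjusted (reduced) covariance
    return phi, C_post
```

    The adjusted spectrum reproduces the measured rates more closely than the prior, and the posterior covariance shrinks accordingly; performing this in one linear step, without iteration, is what makes the calculation fast.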

  20. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit, being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models was used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using the Hydra-TH code. -- Highlights: •We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. •Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations. •Hydra-TH can accurately simulate the laminar boundary layers. •Hydra-TH can accurately simulate the turbulent boundary layers with RANS turbulence models. •Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.

  1. Effect of respiratory motion on internal radiation dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Tianwu; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch; Geneva Neuroscience Center, Geneva University, Geneva CH-1205

    Purpose: Estimation of the radiation dose to internal organs is essential for the assessment of radiation risks and benefits to patients undergoing diagnostic and therapeutic nuclear medicine procedures, including PET. Respiratory motion induces notable internal organ displacement, which influences the absorbed dose for external exposure to radiation. However, to the authors' knowledge, the effect of respiratory motion on internal radiation dosimetry has never been reported before. Methods: Thirteen computational models representing the adult male at different respiratory phases of the normal respiratory cycle were generated from the 4D dynamic XCAT phantom. Monte Carlo calculations were performed using the MCNP transport code to estimate the specific absorbed fractions (SAFs) of monoenergetic photons/electrons, the S-values of common positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Rb-82, Y-86, and I-124), and the absorbed dose of {sup 18}F-fluorodeoxyglucose ({sup 18}F-FDG) in 28 target regions for both the static (average of dynamic frames) and dynamic phantoms. Results: The self-absorbed dose for most organs/tissues is only slightly influenced by respiratory motion. However, for the lung, the self-absorbed SAF is about 11.5% higher at the peak exhale phase than at the peak inhale phase for photon energies above 50 keV. The cross-absorbed dose is clearly affected by respiratory motion for many combinations of source-target pairs. The cross-absorbed S-values for the heart contents irradiating the lung are about 7.5% higher in the peak exhale phase than in the peak inhale phase for different positron-emitting radionuclides. For {sup 18}F-FDG, organ absorbed doses are less influenced by respiratory motion. Conclusions: Respiration-induced volume variations of the lungs and the repositioning of internal organs affect the self-absorbed dose of the lungs and the cross-absorbed dose between organs in internal radiation dosimetry. 
The dynamic anatomical model provides more accurate internal radiation dosimetry estimates for the lungs and abdominal organs based on realistic modeling of respiratory motion. This work also contributes to a better understanding of model-induced uncertainties in internal radiation dosimetry.
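    The S-values discussed above follow the MIRD schema, in which the dose to a target organ from activity in a source organ factors into emitted energy, absorbed fraction, and target mass; a minimal sketch (the numbers in the test below are illustrative, not the paper's XCAT-derived values):

```python
def s_value(mean_energies_joule, absorbed_fractions, target_mass_kg):
    """MIRD-style S value (Gy per decay) for a source-target organ pair:
        S = sum_i Delta_i * phi_i / m_target
    where Delta_i is the mean energy emitted per decay for radiation type i
    and phi_i is the fraction of that energy absorbed in the target."""
    return sum(d * f for d, f in
               zip(mean_energies_joule, absorbed_fractions)) / target_mass_kg

def absorbed_dose(A_tilde_decays, s):
    """Absorbed dose (Gy) given the time-integrated activity (decays)."""
    return A_tilde_decays * s
```

    Respiratory motion enters through phi_i (organ repositioning changes the cross-absorbed fraction) and m_target (lung volume varies over the breathing cycle), which is exactly why the lung S-values shift between inhale and exhale phases.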

  2. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.

  3. A Dosimetry Assessment for the Core Restraint of an Advanced Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.; Allen, D. A.; Tyrrell, R. J.; Meese, T. C.; Huggon, A. P.; Whiley, G. S.; Mossop, J. R.

    2009-08-01

    This paper describes calculations of neutron damage rates within the core restraint structures of Advanced Gas Cooled Reactors (AGRs). Using advanced features of the Monte Carlo radiation transport code MCBEND, and neutron source data from core follow calculations performed with the reactor physics code PANTHER, a detailed model of the reactor cores of two of British Energy's AGR power plants has been developed for this purpose. Because there are no relevant neutron fluence measurements directly supporting this assessment, results of benchmark comparisons and successful validation of MCBEND for Magnox reactors have been used to estimate systematic and random uncertainties on the predictions. In particular, it has been necessary to address the known under-prediction of lower energy fast neutron responses associated with the penetration of large thicknesses of graphite.

  4. Measurements and simulations of the radiation exposure to aircraft crew workplaces due to cosmic radiation in the atmosphere.

    PubMed

    Beck, P; Latocha, M; Dorman, L; Pelliccioni, M; Rollet, S

    2007-01-01

    As required by the European Directive 96/29/Euratom, radiation exposure due to natural ionizing radiation has to be taken into account at workplaces if the effective dose could exceed 1 mSv per year. Aircraft crew, who are exposed to cosmic radiation in the atmosphere, are one example of workers concerned by this directive. Extensive measurement campaigns on board aircraft have been carried out to assess ambient dose equivalent. A consortium of European dosimetry institutes within EURADOS WG5 summarized experimental data and results of calculations, together with detailed descriptions of the methods for measurements and calculations. The radiation protection quantity of interest is the effective dose, E (ISO). The comparison of measured and calculated results is done in terms of the operational quantity ambient dose equivalent, H*(10). This paper gives an overview of the EURADOS Aircraft Crew In-Flight Database and presents a new empirical model providing fitting functions for these data. Furthermore, it describes numerical simulations performed with the Monte Carlo code FLUKA-2005 using an updated version of the cosmic radiation primary spectra. The ratio between ambient dose equivalent and effective dose at commercial flight altitudes, calculated with FLUKA-2005, is discussed. Finally, it presents the aviation dosimetry model AVIDOS, based on FLUKA-2005 simulations, for routine dose assessment. The code has been developed by Austrian Research Centers (ARC) for public use (http://avidos.healthphysics.at).

  5. G4DARI: Geant4/GATE based Monte Carlo simulation interface for dosimetry calculation in radiotherapy.

    PubMed

    Slimani, Faiçal A A; Hamdi, Mahdjoub; Bentourkia, M'hamed

    2018-05-01

    Monte Carlo (MC) simulation is widely recognized as an important technique to study the physics of particle interactions in nuclear medicine and radiation therapy. Different codes dedicated to dosimetry applications are widely used today in research and clinical applications, such as MCNP, EGSnrc and Geant4. However, while such codes make the physics easier, the programming remains a tedious task even for physicists familiar with computer programming. In this paper we report the development of a new interface, GEANT4 Dose And Radiation Interactions (G4DARI), based on GEANT4 for absorbed dose calculation and particle tracking in humans, small animals and complex phantoms. The calculation of the absorbed dose is performed from 3D CT human or animal images in DICOM format, from images of phantoms, or from solid volumes made of any pure or composite material specified by its molecular formula. G4DARI offers menus to the user and tabs to be filled with values or chemical formulas. The interface is described and, as an application, we show results obtained in a lung tumor in a digital mouse irradiated with seven energy beams and in a patient with glioblastoma irradiated with five photon beams. In conclusion, G4DARI can be easily used by any researcher without the need to be familiar with computer programming, and it will be freely available as an application package. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Are you ready for a code blue in your office?

    PubMed Central

    Moore, Simon

    2015-01-01

    Summary Problem addressed Medical emergencies are common in family physicians' offices, yet many physicians are unprepared to respond to them. An online educational video addressing office emergencies could improve the emergency response of physicians and their staff. No such tool has been described before. Objective of program To use evidence-based practices to produce an educational video explaining how to prepare for an emergency in a medical office, to disseminate the video online, and to evaluate the attitudes of physicians and their staff toward the video. Program description A 6-minute video was produced based on a review of the recent literature and the policies of Canadian regulatory bodies. The video describes recommended emergency equipment, ways to improve emergency response, and office staff training. Physicians and their staff were invited to view the video online at www.OfficeEmergencies.ca. The audience's opinion of the video's format and content was evaluated through a survey (N = 275). Conclusion Survey results indicate that the video was relevant and well presented, and that the online format was convenient and satisfactory. Participants would enroll in further training using this technology and agreed that this program would improve patient care.

  7. Spatial large-eddy simulations of contrail formation in the wake of an airliner

    NASA Astrophysics Data System (ADS)

    Paoli, R.

    2015-12-01

    Contrails and contrail-cirrus are the most uncertain contributors to aviation radiative forcing. In order to reduce this uncertainty one needs to gain more knowledge of the physicochemical processes occurring in the aircraft plume, which eventually lead to the transformation of contrails into cirrus. To that end, accurate prediction of the number of activated particles and their spatial and size distributions at the end of the jet regime may be helpful for initializing simulations in the following vortex regime. We present results from spatial large-eddy simulations (LES) of contrail formation in the near-field wake of a generic (but full-scale) airliner representative of those used on long-haul flights in current fleets. The flow around the aircraft was computed using a RANS code taking into account the full geometry, including the engines, and the aerodynamic set-up for cruise conditions. The data were reconstructed at a plane closely behind the trailing edge of the wing and used as inflow boundary conditions for the LES. We employ fully compressible 3D LES coupled to a Lagrangian microphysics module that tracks parcels of ice particles individually. The ice microphysical model is simple, yet it contains the basic thermodynamic ingredients to model soot activation and water vapor deposition. Compared to one-dimensional models or even RANS, LES allows for more accurate predictions of the mixing between exhaust and ambient air. Hence, the number of activated particles and the ice growth rate can also be determined with higher accuracy. This is particularly crucial for particles located at the edge of the jet, which experience large gradients of temperature and humidity. 
The results of the fully coupled LES (where the gas phase and the particles are solved together) are compared to offline simulations in which the ice microphysics model is run using thermodynamic data from pre-calculated particle trajectories extracted from inert LES (with ice microphysics switched off).

  8. Fetal alcohol spectrum disorder: Canadian guidelines for diagnosis

    PubMed Central

    Chudley, Albert E.; Conry, Julianne; Cook, Jocelynn L.; Loock, Christine; Rosales, Ted; LeBlanc, Nicole

    2005-01-01

    Summary THE DIAGNOSIS OF FETAL ALCOHOL SPECTRUM DISORDER (FASD) is complex, and the development of diagnostic guidelines is warranted. A subcommittee of the Public Health Agency of Canada's National Advisory Committee on Fetal Alcohol Spectrum Disorder reviewed, analyzed, and integrated current diagnostic approaches in order to reach a consensus on a standardized approach to diagnosis in Canada. The purpose of this document is to review and clarify the use of current diagnostic systems and to make recommendations on their application for the diagnosis of FASD-related disabilities in individuals of all ages. The guidelines are based on a broad consensus of practitioners and other specialists in the field. They are organized into seven categories: screening and referral; the physical examination and differential diagnosis; the neurobehavioural assessment; treatment and follow-up; the maternal alcohol history during pregnancy; diagnostic criteria for fetal alcohol syndrome (FAS), partial FAS and alcohol-related neurodevelopmental disorder; and harmonization of the Institute of Medicine (IOM) and 4-Digit Diagnostic Code approaches. Diagnosis requires a comprehensive history and a physical and neurobehavioural assessment, using a multidisciplinary approach. These guidelines for the diagnosis of fetal alcohol syndrome and its related disabilities are the first to have been developed in Canada, and they are based on consultation with a wide range of diagnostic specialists.

  9. Grid-Independent Large-Eddy Simulation in Turbulent Channel Flow using Three-Dimensional Explicit Filtering

    NASA Technical Reports Server (NTRS)

    Gullbrand, Jessica

    2003-01-01

    In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length-scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. 
The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.
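    The key to the "true" LES procedure above, holding the filter width fixed in physical units while the grid is refined, can be illustrated with a simple one-dimensional explicit filter (a top-hat kernel for brevity; the study itself uses fourth-order commutative filters, so this is only a sketch of the concept):

```python
import numpy as np

def explicit_tophat_filter(u, dx, filter_width):
    """Explicit top-hat filter of fixed physical width applied to a 1D
    field u sampled with grid spacing dx. Because the kernel width is set
    in physical units, refining dx while holding filter_width constant
    converges toward the filtered field rather than toward a DNS."""
    half = max(1, int(round(filter_width / (2.0 * dx))))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    return np.convolve(u, kernel, mode="same")
```

    Halving dx doubles the number of points under the kernel, so the resolved length scales stay pinned to the filter cutoff; in an implicitly filtered LES the cutoff would instead shrink with the grid.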

  10. A modern Monte Carlo investigation of the TG-43 dosimetry parameters for an {sup 125}I seed already having AAPM consensus data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aryal, Prakash; Molloy, Janelle A.; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com

    2014-02-15

    Purpose: To investigate potential causes for differences in TG-43 brachytherapy dosimetry parameters in the existing literature for the model IAI-125A {sup 125}I seed and to propose new standard dosimetry parameters. Methods: The MCNP5 code was used for Monte Carlo (MC) simulations. The sensitivity of dose distributions, and subsequently TG-43 dosimetry parameters, was explored to reproduce the historical methods upon which American Association of Physicists in Medicine (AAPM) consensus data are based. Twelve simulation conditions varying {sup 125}I coating thickness, coating mass density, photon interaction cross-section library, and photon emission spectrum were examined. Results: Varying {sup 125}I coating thickness, coating mass density, photon cross-section library, and photon emission spectrum for the model IAI-125A seed changed the dose-rate constant by up to 0.9%, about 1%, about 3%, and 3%, respectively, in comparison to the proposed standard value of 0.922 cGy h{sup −1} U{sup −1}. The dose-rate constant values by Solberg et al. [“Dosimetric parameters of three new solid core {sup 125}I brachytherapy sources,” J. Appl. Clin. Med. Phys. 3, 119–134 (2002)], Meigooni et al. [“Experimental and theoretical determination of dosimetric characteristics of IsoAid ADVANTAGE™ {sup 125}I brachytherapy source,” Med. Phys. 29, 2152–2158 (2002)], and Taylor and Rogers [“An EGSnrc Monte Carlo-calculated database of TG-43 parameters,” Med. Phys. 35, 4228–4241 (2008)] for the model IAI-125A seed and Kennedy et al. [“Experimental and Monte Carlo determination of the TG-43 dosimetric parameters for the model 9011 THINSeed™ brachytherapy source,” Med. Phys. 37, 1681–1688 (2010)] for the model 6711 seed were +4.3% (0.962 cGy h{sup −1} U{sup −1}), +6.2% (0.98 cGy h{sup −1} U{sup −1}), +0.3% (0.925 cGy h{sup −1} U{sup −1}), and −0.2% (0.921 cGy h{sup −1} U{sup −1}), respectively, in comparison to the proposed standard value. 
    Differences in the radial dose functions between the current study and both Solberg et al. and Meigooni et al. were <10% for r ≤ 5 cm, and increased for r > 5 cm with a maximum difference of 29% at r = 9 cm. In comparison to Taylor and Rogers, these differences were lower (maximum of 2% at r = 9 cm). For the similarly designed model 6711 125I seed, differences did not exceed 0.5% for 0.5 ≤ r ≤ 10 cm. Radial dose function values varied by 1% as coating thickness and coating density were changed. Varying the cross-section library and source spectrum altered the radial dose function by 25% and 12%, respectively, but these differences occurred at r = 10 cm where the dose rates were very low. The 2D anisotropy function results were most similar to those of Solberg et al. and most different from those of Meigooni et al. The observed order of simulation condition variables from most to least important for influencing the 2D anisotropy function was spectrum, coating thickness, coating density, and cross-section library. Conclusions: Several MC radiation transport codes are available for calculation of the TG-43 dosimetry parameters for brachytherapy seeds. The physics models in these codes and their related cross-section libraries have been updated and improved since publication of the 2007 AAPM TG-43U1S1 report. Results using modern data indicated statistically significant differences in these dosimetry parameters in comparison to data recommended in the TG-43U1S1 report. Therefore, professional societies such as the AAPM should consider reevaluating the consensus data for this and other seeds, and establishing a process of regular evaluations in which consensus data are based upon methods that remain state-of-the-art.
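
    The dose-rate constant comparisons above are signed percent differences against the proposed standard value. A minimal sketch, using only the rounded constants quoted in the abstract (the abstract's own percentages were presumably computed from unrounded source values, so recomputed figures can differ in the last digit):

```python
# Percent difference of published dose-rate constants (cGy h^-1 U^-1)
# relative to the proposed standard value quoted in the abstract.
PROPOSED_STANDARD = 0.922  # cGy h^-1 U^-1

def percent_diff(value, reference=PROPOSED_STANDARD):
    """Signed percent difference of `value` relative to `reference`."""
    return 100.0 * (value - reference) / reference

published = {
    "Solberg et al. (2002)": 0.962,
    "Meigooni et al. (2002)": 0.98,
    "Taylor and Rogers (2008)": 0.925,
    "Kennedy et al. (2010), model 6711": 0.921,
}

for study, value in published.items():
    print(f"{study}: {percent_diff(value):+.1f}%")
```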

  11. Hydrogeologic uncertainties and policy implications: The Water Consumer Protection Act of Tucson, Arizona, USA

    NASA Astrophysics Data System (ADS)

    Wilson, L. G.; Matlock, W. G.; Jacobs, K. L.

    The 1995 Water Consumer Protection Act of Tucson, Arizona, USA (hereafter known as the Act) was passed following complaints from Tucson Water customers receiving treated Central Arizona Project (CAP) water. Consequences of the Act demonstrate the uncertainties and difficulties that arise when the public is asked to vote on a highly technical issue. The recharge requirements of the Act neglect hydrogeological uncertainties because of confusion between "infiltration" and "recharge." Thus, the Act implies that infiltration in stream channels along the Central Wellfield will promote recharge in the Central Wellfield. In fact, permeability differences between channel alluvium and underlying basin-fill deposits may lead to subjacent outflow. Additionally, even if recharge of Colorado River water occurs in the Central Wellfield, groundwater will become gradually salinized. The Act's restrictions on the use of CAP water affect the four regulatory mechanisms in Arizona's 1980 Groundwater Code as they relate to the Tucson Active Management Area: (a) supply augmentation; (b) requirements for groundwater withdrawals and permitting; (c) Management Plan requirements, particularly mandatory conservation and water-quality issues; and (d) the requirement that all new subdivisions use renewable water supplies in lieu of groundwater. Political fallout includes disruption of normal governmental activities because of the demands of implementing the Act.

  12. Development of probabilistic internal dosimetry computer code

    NASA Astrophysics Data System (ADS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-02-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was developed. Based on this system, we developed a probabilistic internal-dose-assessment code in MATLAB to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations.
    In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
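
    The propagation step described above can be sketched with a toy Monte Carlo model (this is not the MATLAB code of the study; all distributions, parameter values, and the single-compartment dose model are invented for illustration): each input carries a lognormal uncertainty, and the dose distribution is summarized by percentiles.

```python
import random

# Toy probabilistic intake/dose assessment: a measured bioassay activity,
# an intake retention fraction (IRF), and a dose coefficient each carry a
# lognormal uncertainty, propagated by Monte Carlo sampling. All central
# values and sigmas below are illustrative placeholders.
random.seed(1)

N = 20000
doses = []
for _ in range(N):
    measured = random.lognormvariate(0.0, 0.10) * 50.0      # Bq measured
    irf = random.lognormvariate(0.0, 0.30) * 0.02           # fraction retained
    dose_coeff = random.lognormvariate(0.0, 0.20) * 1e-5    # Sv per Bq intake
    intake = measured / irf                                 # Bq
    doses.append(intake * dose_coeff)                       # Sv

doses.sort()

def percentile(sorted_vals, p):
    """Percentile by rank index on an ascending-sorted sample."""
    i = min(len(sorted_vals) - 1, int(p / 100.0 * len(sorted_vals)))
    return sorted_vals[i]

summary = {p: percentile(doses, p) for p in (2.5, 5, 50, 95, 97.5)}
```

    The 95th or 97.5th percentile of `summary` is the kind of "high statistical value" the abstract suggests using for intervention decisions.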

  13. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100" - the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. The DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100".
    The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
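
    The scoring step behind a dose-point kernel can be sketched as follows: energy depositions around an isotropic point source are binned into concentric spherical shells, and each shell records its fraction of the emitted energy. The toy transport model here (an exponential radial deposition distance) is an invented stand-in for a real track-structure simulation; only the shell scoring is the point of the sketch.

```python
import random

random.seed(7)

# Toy model: each "electron history" deposits unit energy at a radial
# distance drawn from an exponential distribution (mean free path MFP is
# an arbitrary illustrative choice). Real MCTS codes transport electrons
# interaction by interaction; only the DPK shell-scoring step is shown.
MFP = 0.5            # arbitrary length units
N_SHELLS = 20
SHELL_DR = 0.25      # shell thickness
N_HISTORIES = 50000

energy_in_shell = [0.0] * N_SHELLS
for _ in range(N_HISTORIES):
    r = random.expovariate(1.0 / MFP)
    shell = int(r / SHELL_DR)
    if shell < N_SHELLS:
        energy_in_shell[shell] += 1.0  # unit energy per history

# DPK: fraction of emitted energy deposited in each shell.
dpk = [e / N_HISTORIES for e in energy_in_shell]
```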

  14. WE-B-207-02: CT Lung Cancer Screening and the Medical Physicist: A Dosimetry Summary of CT Participants in the National Lung Cancer Screening Trial (NLST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.

    2015-06-15

    The US National Lung Screening Trial (NLST) was a multi-center randomized, controlled trial comparing a low-dose CT (LDCT) to posterior-anterior (PA) chest x-ray (CXR) in screening older, current and former heavy smokers for early detection of lung cancer. Recruitment was launched in September 2002 and ended in April 2004 when 53,454 participants had been randomized at 33 screening sites in equal proportions. Funded by the National Cancer Institute, this trial demonstrated that LDCT screening reduced lung cancer mortality. The US Preventive Services Task Force (USPSTF) cited NLST findings and conclusions in its deliberations and analysis of lung cancer screening. Under the 2010 Patient Protection and Affordable Care Act, the USPSTF favorable recommendation regarding lung cancer CT screening assisted in obtaining third-party payer coverage for screening. The objective of this session is to provide an introduction to the NLST and the trial findings, in addition to a comprehensive review of the dosimetry investigations and assessments completed using individual NLST participant CT and CXR examinations. Session presentations will review and discuss the findings of two independent assessments, a CXR assessment and the findings of a CT investigation calculating individual organ dosimetry values. The CXR assessment reviewed a total of 73,733 chest x-ray exams that were performed on 92 chest imaging systems, of which 66,157 participant examinations were used. The CT organ dosimetry investigation collected scan parameters from 23,773 CT examinations; a subset of the 75,133 CT examinations performed using 97 multi-detector CT scanners. Organ dose conversion coefficients were calculated using a Monte Carlo code. An experimentally-validated CT scanner simulation was coupled with 193 adult hybrid computational phantoms representing the height and weight of the current U.S. population.
    The dose to selected organs was calculated using the organ dose library and the abstracted scan parameters. This session will review the results and summarize the individualized doses to major organs and the mean effective dose and CTDIvol estimate for 66,157 PA chest and 23,773 CT examinations respectively, using size-dependent computational phantoms coupled with Monte Carlo calculations. Learning Objectives: Review and summarize relevant NLST findings and conclusions. Understand the scope and scale of the NLST specific to participant dosimetry. Provide a comprehensive review of NLST participant dosimetry assessments. Summarize the results of an investigation providing individualized organ dose estimates for NLST participant cohorts.
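
    The organ-dose reconstruction described above reduces, per examination, to scaling pre-computed conversion coefficients (organ dose per unit CTDIvol, from Monte Carlo simulations of a matched phantom) by the abstracted CTDIvol. A minimal sketch; every number below is an invented placeholder, not a value from the NLST dosimetry library:

```python
# Hypothetical organ dose conversion coefficients for one phantom:
# mGy organ dose per mGy CTDIvol (illustrative values only).
conversion_coeffs = {
    "lung": 1.2,
    "breast": 1.1,
    "liver": 0.9,
}

def organ_doses(ctdi_vol_mgy, coeffs):
    """Scale each organ's conversion coefficient by the exam CTDIvol."""
    return {organ: k * ctdi_vol_mgy for organ, k in coeffs.items()}

# One abstracted examination with CTDIvol = 2.5 mGy (illustrative).
exam = organ_doses(2.5, conversion_coeffs)
```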

  15. Monte Carlo Simulations Comparing the Response of a Novel Hemispherical Tepc to Existing Spherical and Cylindrical Tepcs for Neutron Monitoring and Dosimetry.

    PubMed

    Broughton, David P; Waker, Anthony J

    2017-05-01

    Neutron dosimetry in reactor fields is currently mainly conducted with unwieldy flux monitors. Tissue Equivalent Proportional Counters (TEPCs) have been shown to have the potential to improve the accuracy of neutron dosimetry in these fields, and Multi-Element Tissue Equivalent Proportional Counters (METEPCs) could reduce the size of instrumentation required to do so. The complexity of current METEPC designs has inhibited their use beyond research. This work proposes a novel hemispherical counter with a wireless anode ball in place of the traditional anode wire as a possible route to simpler manufacturing. The hemispherical METEPC element was analyzed as a single TEPC to first demonstrate the potential of this new design by evaluating its performance relative to the reference spherical TEPC design and a single element from a cylindrical METEPC. Energy deposition simulations were conducted using the Monte Carlo code PHITS for both monoenergetic 2.5 MeV neutrons and the neutron energy spectrum of a D2O-moderated Cf source. In these neutron fields, the hemispherical counter appears to be a good alternative to the reference spherical geometry, performing slightly better than the cylindrical counter, which tends to under-respond in terms of H*(10) for the lower neutron energies of the D2O-moderated Cf field. These computational results are promising, and if follow-up experimental work demonstrates that the hemispherical counter works as anticipated, it will be ready to be incorporated into an METEPC design.
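
    TEPC response is conventionally summarized through the microdosimetric lineal-energy spectrum f(y), from which the standard frequency-mean and dose-mean lineal energies are derived. A sketch of those two textbook definitions, applied to an invented toy spectrum (not simulated PHITS data):

```python
# Frequency-mean (yF) and dose-mean (yD) lineal energy from a spectrum
# f(y): yF = sum(y*f) / sum(f),  yD = sum(y^2*f) / sum(y*f).
def mean_lineal_energies(y_bins, f):
    sf = sum(f)
    syf = sum(y * fi for y, fi in zip(y_bins, f))
    sy2f = sum(y * y * fi for y, fi in zip(y_bins, f))
    return syf / sf, sy2f / syf  # (yF, yD)

# Toy spectrum in keV/um with relative frequencies (illustrative only).
y_bins = [1.0, 5.0, 20.0, 100.0]
f = [0.5, 0.3, 0.15, 0.05]
yF, yD = mean_lineal_energies(y_bins, f)
```

    The dose-mean yD weights high-lineal-energy (e.g. recoil-proton) events more heavily, which is why it never falls below yF.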

  16. Single ionization and capture cross sections from biological molecules by bare projectile impact*

    NASA Astrophysics Data System (ADS)

    Quinto, Michele A.; Monti, Juan M.; Montenegro, Pablo D.; Fojón, Omar A.; Champion, Christophe; Rivarola, Roberto D.

    2017-02-01

    We report calculations of singly differential and total cross sections for single ionization and single electron capture from biological targets, namely water vapor and DNA nucleobase molecules, by bare projectile impact: H+, He2+, and C6+. They are performed within the Continuum Distorted Wave - Eikonal Initial State approximation and compared to several existing experimental data sets. This study aims to provide a reliable set of theoretical data for use as input to a Monte Carlo code intended for micro- and nanodosimetry.
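
    A total cross section is obtained by integrating the singly differential cross section (SDCS) over the ejected-electron energy. A minimal quadrature sketch with an analytic toy SDCS (a 1/E² form chosen only so the answer is known; this is not CDW-EIS output):

```python
# Trapezoidal integration of a toy SDCS to get a "total cross section".
def trapezoid(xs, ys):
    """Composite trapezoid rule over tabulated points (xs ascending)."""
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# Toy SDCS ~ 1/E^2 on E in [1, 10] (arbitrary units);
# the analytic integral is 1 - 1/10 = 0.9.
xs = [1.0 + 0.01 * i for i in range(901)]
ys = [1.0 / x**2 for x in xs]
tcs = trapezoid(xs, ys)
```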

  17. The computation of ICRP dose coefficients for intakes of radionuclides with PLEIADES: biokinetic aspects.

    PubMed

    Fell, T P

    2007-01-01

    The ICRP has published dose coefficients for the ingestion or inhalation of radionuclides in a series of reports covering intakes by workers and members of the public including children and pregnant or lactating women. The calculation of these coefficients conveniently divides into two distinct parts--the biokinetic and dosimetric. This paper gives a brief summary of the methods used to solve the biokinetic problem in the generation of dose coefficients on behalf of the ICRP, as implemented in the Health Protection Agency's internal dosimetry code PLEIADES.
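
    The biokinetic half of such a calculation solves first-order compartment models in which activity is removed both biologically and by radioactive decay. The simplest special case, single-compartment retention with an effective decay constant, can be sketched as follows (PLEIADES solves full multi-compartment ICRP models; the numerical values here are illustrative, loosely I-131-like):

```python
import math

# Single-compartment retention: lambda_eff = lambda_radiological +
# lambda_biological, so effective half-lives combine harmonically.
def effective_half_life(t_radiological, t_biological):
    return 1.0 / (1.0 / t_radiological + 1.0 / t_biological)

def retained_activity(a0, t, t_radiological, t_biological):
    """Activity remaining in the compartment at time t (same units as a0)."""
    lam = math.log(2.0) / effective_half_life(t_radiological, t_biological)
    return a0 * math.exp(-lam * t)

# Illustrative values: 8-day radiological, 80-day biological half-life.
t_eff = effective_half_life(8.0, 80.0)       # days
a = retained_activity(1000.0, t_eff, 8.0, 80.0)  # Bq after one t_eff
```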

  18. Effect of artificial length scales in large eddy simulation of a neutral atmospheric boundary layer flow: A simple solution to log-layer mismatch

    NASA Astrophysics Data System (ADS)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2017-07-01

    A large eddy simulation (LES) methodology coupled with near-wall modeling has been implemented in the current study for high-Re neutral atmospheric boundary layer flows, using an exponentially accurate spectral element method in the open-source research code Nek5000. The effect of artificial length scales introduced by subgrid-scale (SGS) and near-wall modeling (NWM) on the scaling laws and structure of the inner- and outer-layer eddies is studied using varying SGS and NWM parameters in the spectral element framework. The study provides an understanding of the various length scales and eddy dynamics affected by the LES model, and of the fundamental physics behind the inner- and outer-layer eddies responsible for the correct behavior of the mean statistics in accordance with Townsend's definition of equilibrium layers. An economical and accurate LES model based on capturing the near-wall coherent eddies has been designed, which succeeds in eliminating artificial length scale effects such as log-layer mismatch and secondary peak generation in the streamwise variance.
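
    The log-layer mismatch mentioned above is a deviation of the computed mean profile from the equilibrium log law, U⁺ = (1/κ) ln(z⁺) + B. A common diagnostic is to fit the slope of U⁺ against ln(z⁺) and compare the implied von Kármán constant with its nominal value; a sketch on a synthetic (exact) profile, not Nek5000 output:

```python
import math

# Synthetic log-law profile with standard constants kappa = 0.41, B = 5.2.
KAPPA, B = 0.41, 5.2
z_plus = [100.0 * 2**i for i in range(6)]
u_plus = [math.log(z) / KAPPA + B for z in z_plus]

# Least-squares slope of U+ versus ln(z+); 1/slope estimates kappa.
xs = [math.log(z) for z in z_plus]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(u_plus) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, u_plus))
         / sum((x - xbar) ** 2 for x in xs))
kappa_fit = 1.0 / slope
```

    On LES data, a `kappa_fit` that drifts from the nominal 0.41 in the fitted region is the signature of log-layer mismatch.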

  19. Development of the Glenn-HT Computer Code to Enable Time-Filtered Navier-Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Philip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include laminar-turbulent transition, turbulent mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy, but at a considerably higher cost. In recent years, hybrid schemes which take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-HT code and applied to film cooling flows. In this report the method and its implementation are briefly described. Film effectiveness results are shown for film cooling from a row of 30 holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and four blowing ratios of 0.5, 1.0, 1.5 and 2.0. Flow features under those conditions are also described.
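
    Two standard film-cooling definitions underlie the quoted conditions: the blowing ratio M compares coolant to freestream mass flux, and adiabatic film effectiveness η rescales the adiabatic wall temperature between coolant and freestream values. A sketch with illustrative numbers (not results from the report):

```python
# Blowing ratio: M = (rho_c * U_c) / (rho_inf * U_inf).
def blowing_ratio(rho_c, u_c, rho_inf, u_inf):
    return (rho_c * u_c) / (rho_inf * u_inf)

# Adiabatic film effectiveness: eta = (T_aw - T_inf) / (T_c - T_inf).
def film_effectiveness(t_aw, t_inf, t_c):
    return (t_aw - t_inf) / (t_c - t_inf)

# Density ratio of unity, coolant at half the freestream velocity -> M = 0.5.
m = blowing_ratio(rho_c=1.0, u_c=30.0, rho_inf=1.0, u_inf=60.0)
eta = film_effectiveness(t_aw=320.0, t_inf=350.0, t_c=300.0)
```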

  20. Protocols for the dosimetry of high-energy photon and electron beams: a comparison of the IAEA TRS-398 and previous international Codes of Practice

    NASA Astrophysics Data System (ADS)

    Andreo, Pedro; Saiful Huq, M.; Westermark, Mathias; Song, Haijun; Tilikidis, Aris; DeWerd, Larry; Shortt, Ken

    2002-09-01

    A new international Code of Practice for radiotherapy dosimetry co-sponsored by several international organizations has been published by the IAEA, TRS-398. It is based on standards of absorbed dose to water, whereas previous protocols (TRS-381 and TRS-277) were based on air kerma standards. To estimate the changes in beam calibration caused by the introduction of TRS-398, a detailed experimental comparison of the dose determination in reference conditions in high-energy photon and electron beams has been made using the different IAEA protocols. A summary of the formulation and reference conditions in the various Codes of Practice, as well as of their basic data, is presented first. Accurate measurements have been made in 25 photon and electron beams from 10 clinical accelerators using 12 different cylindrical and plane-parallel chambers, and dose ratios of TRS-398 to the other protocols under different conditions were determined. A strict step-by-step checklist was followed by the two participating clinical institutions to ascertain that the resulting calculations agreed within tenths of a per cent. The maximum differences found between TRS-398 and the previous Codes of Practice TRS-277 (2nd edn) and TRS-381 are of the order of 1.5-2.0%. TRS-398 yields absorbed doses larger than the previous protocols, around 1.0% for photons (TRS-277) and for electrons (TRS-381 and TRS-277) when plane-parallel chambers are cross-calibrated. For the Markus chamber, results show a very large variation, although a fortuitous cancellation of the old stopping powers with the ND,w/NK ratios makes the overall discrepancy between TRS-398 and TRS-277 in this case smaller than for well-guarded plane-parallel chambers. Chambers of the Roos-type with a 60Co ND,w calibration yield the maximum discrepancy in absorbed dose, which varies between 1.0% and 1.5% for TRS-381 and between 1.5% and 2.0% for TRS-277.
    Photon beam calibrations using directly measured TPR20,10, or TPR20,10 calculated from percentage depth dose data at SSD = 100 cm, were found to be indistinguishable. Considering that approximately 0.8% of the differences between TRS-398 and the NK-based protocols is caused by the change to the new type of standards, the remaining difference in absolute dose is due either to a close similarity in basic data or to a fortuitous cancellation of the discrepancies in data and type of chamber calibration. It is emphasized that the NK-ND,air and ND,w formalisms have very similar uncertainty when the same criteria are used for both procedures. Arguments are provided in support of the recommendation for a change to reference dosimetry based on standards of absorbed dose to water.
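
    The two formalisms being compared can be outlined in a few lines. With absorbed-dose standards, Dw = M · ND,w · kQ; with air-kerma standards (TRS-277/381 style), Dw = M · ND,air · sw,air · p, where ND,air = NK · (1 − g) · katt · km. A sketch of that bookkeeping; every factor value below is an illustrative placeholder, not protocol data:

```python
# Absorbed-dose-to-water formalism: D_w = M * N_Dw * k_Q.
def dose_absorbed_dose_standard(m_reading, n_dw, k_q):
    return m_reading * n_dw * k_q

# Air-kerma formalism: D_w = M * N_Dair * s_wair * p,
# with N_Dair = N_K * (1 - g) * k_att * k_m.
def dose_air_kerma_standard(m_reading, n_k, g, k_att, k_m, s_wair, p):
    n_dair = n_k * (1.0 - g) * k_att * k_m
    return m_reading * n_dair * s_wair * p

# Illustrative chamber/beam factors (not from TRS-398 or TRS-277 tables).
d_new = dose_absorbed_dose_standard(10.0, 5.0e-2, 0.99)
d_old = dose_air_kerma_standard(10.0, 4.6e-2, 0.003, 0.990, 0.995, 1.12, 0.99)
ratio_percent = 100.0 * (d_new / d_old - 1.0)
```

    Comparisons like the 1.5-2.0% maxima quoted above are exactly this kind of ratio, evaluated with each protocol's own data.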

  1. The work programme of EURADOS on internal and external dosimetry.

    PubMed

    Rühm, W; Bottollier-Depois, J F; Gilvin, P; Harrison, R; Knežević, Ž; Lopez, M A; Tanner, R; Vargas, A; Woda, C

    2018-01-01

    Since the early 1980s, the European Radiation Dosimetry Group (EURADOS) has been maintaining a network of institutions interested in the dosimetry of ionising radiation. As of 2017, this network includes more than 70 institutions (research centres, dosimetry services, university institutes, etc.), and the EURADOS database lists more than 500 scientists who contribute to the EURADOS mission, which is to promote research and technical development in dosimetry and its implementation into practice, and to contribute to harmonisation of dosimetry in Europe and its conformance with international practices. The EURADOS working programme is organised into eight working groups dealing with environmental, computational, internal, and retrospective dosimetry; dosimetry in medical imaging; dosimetry in radiotherapy; dosimetry in high-energy radiation fields; and harmonisation of individual monitoring. Results are published as freely available EURADOS reports and in the peer-reviewed scientific literature. Moreover, EURADOS organises winter schools and training courses on various aspects relevant for radiation dosimetry, and formulates the strategic research needs in dosimetry important for Europe. This paper gives an overview of the most important EURADOS activities. More details can be found at www.eurados.org.

  2. Monte Carlo Investigation on the Effect of Heterogeneities on Strut Adjusted Volume Implant (SAVI) Dosimetry

    NASA Astrophysics Data System (ADS)

    Koontz, Craig

    Breast cancer is the most prevalent cancer among women, with more than 225,000 new cases diagnosed in the United States in 2012 (ACS, 2012). With this high prevalence comes an increased emphasis on researching new techniques to treat the disease. Accelerated partial breast irradiation (APBI) has been used as an alternative to whole breast irradiation (WBI) to treat occult disease after lumpectomy. Similar recurrence rates have been found using APBI after lumpectomy as with mastectomy alone, but with the added benefit of improved cosmetic and psychological results. Intracavitary brachytherapy devices have been used to deliver the APBI prescription. However, the inability to produce asymmetric dose distributions in order to avoid overdosing skin and chest wall has been an issue with these devices. Multi-lumen devices were introduced to overcome this problem. Of these, the Strut-Adjusted Volume Implant (SAVI) has demonstrated the greatest ability to produce an asymmetric dose distribution, giving it greater ability to avoid skin and chest wall dose, and thus allowing more women to receive this type of treatment. However, SAVI treatments come with inherent heterogeneities, including variable backscatter due to the proximity of the tissue-air and tissue-lung interfaces and the variable contents of the cavity created by the SAVI. The dose calculation protocol based on TG-43 does not account for heterogeneities and thus will not produce accurate dosimetry; however, Acuros, a model-based dose calculation algorithm from Varian Medical Systems, claims to accurately account for heterogeneities. Monte Carlo simulation can calculate the dosimetry with high accuracy. In this thesis, a model of the SAVI will be created for Monte Carlo simulation, specifically using the MCNP code, in order to explore the effects of heterogeneities on the dose distribution. These data will be compared to TG-43 and Acuros calculated dosimetry to explore their accuracy.
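
    The TG-43 protocol criticized above computes dose in uniform water; in its 1D point-source form the dose rate is D(r) = SK · Λ · (r0/r)² · g(r) · φan(r), with r0 = 1 cm. A minimal sketch of that formalism; the radial dose function table and dose-rate constant below are illustrative placeholders, not consensus data for any seed:

```python
# TG-43 1D (point-source) formalism, homogeneous water medium.
R0 = 1.0  # reference distance, cm

# Hypothetical radial dose function table g(r); real use interpolates
# between tabulated consensus points.
G_R_TABLE = {0.5: 1.05, 1.0: 1.00, 2.0: 0.85, 5.0: 0.45}

def g_r(r):
    return G_R_TABLE[r]

def dose_rate(s_k, dose_rate_constant, r, anisotropy=1.0):
    """cGy/h at distance r (cm) from a point source of strength s_k (U)."""
    return s_k * dose_rate_constant * (R0 / r) ** 2 * g_r(r) * anisotropy

# Illustrative: S_K = 1 U, dose-rate constant Lambda = 0.92 cGy h^-1 U^-1.
d1 = dose_rate(s_k=1.0, dose_rate_constant=0.92, r=1.0)
d2 = dose_rate(s_k=1.0, dose_rate_constant=0.92, r=2.0)
```

    Because nothing in this formula depends on the actual surrounding media, any air cavity or lung interface near the SAVI is invisible to it; that is the gap model-based algorithms and Monte Carlo aim to close.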

  3. Sci-Thur AM: YIS – 04: Stopping power-to-Cherenkov power ratios and beam quality specification for clinical Cherenkov emission dosimetry of electrons: beam-specific effects and experimental validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zlateva, Yana; Seuntjens, Jan; El Naqa, Issam

    Purpose: To advance towards clinical Cherenkov emission (CE)-based dosimetry by investigating beam-specific effects on Monte Carlo-calculated electron-beam stopping power-to-CE power ratios (SCRs), addressing electron beam quality specification in terms of CE, and validating simulations with measurements. Methods: The EGSnrc user code SPRRZnrc, used to calculate Spencer-Attix stopping-power ratios, was modified to instead calculate SCRs. SCRs were calculated for 6- to 22-MeV clinical electron beams from Varian TrueBeam, Clinac 21EX, and Clinac 2100C/D accelerators. Experiments were performed with a 20-MeV electron beam from a Varian TrueBeam accelerator, using a diffraction grating spectrometer with optical fiber input and a cooled back-illuminated CCD. A fluorophore was dissolved in the water to remove CE signal anisotropy. Results: It was found that angular spread of the incident beam has little effect on the SCR (≤ 0.3% at dmax), while both the electron spectrum and photon contamination increase the SCR at shallow depths and decrease it at large depths. A universal data fit of R50 in terms of C50 (50% CE depth) revealed a strong linear dependence (R² > 0.9999). The SCR was fit with a Burns-type equation (R² = 0.9974, NRMSD = 0.5%). Below-threshold incident radiation was found to have minimal effect on beam quality specification (< 0.1%). Experiments and simulations were in good agreement. Conclusions: Our findings confirm the feasibility of the proposed CE dosimetry method, contingent on computation of SCRs from additional accelerators and on further experimental validation. This work constitutes an important step towards clinical high-resolution out-of-beam CE dosimetry.

  4. SU‐C‐105‐05: Reference Dosimetry of High‐Energy Electron Beams with a Farmer‐Type Ionization Chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B; Rogers, D

    2013-06-15

    Purpose: To investigate gradient effects and provide Monte Carlo calculated beam quality conversion factors to characterize the Farmer-type NE2571 ion chamber for high-energy reference dosimetry of clinical electron beams. Methods: The EGSnrc code system is used to calculate the absorbed dose to water and to the gas in a fully modeled NE2571 chamber as a function of depth in a water phantom. Electron beams incident on the surface of the phantom are modeled using realistic BEAMnrc accelerator simulations and electron beam spectra. Beam quality conversion factors are determined using calculated doses to water and to air in the chamber in high-energy electron beams and in a cobalt-60 reference field. Calculated water-to-air stopping power ratios are employed for investigation of the overall ion chamber perturbation factor. Results: An upstream shift of 0.3-0.4 times the chamber radius, rcav, both minimizes the variation of the overall ion chamber perturbation factor with depth and reduces the difference between the beam quality specifier (R50) calculated using ion chamber simulations and that obtained with simulations of dose-to-water in the phantom. Beam quality conversion factors are obtained at the reference depth and gradient effects are optimized using a shift of 0.2rcav. The photon-electron conversion factor, kecal, amounts to 0.906 when gradient effects are minimized using the shift established here and 0.903 if no shift of the data is used. Systematic uncertainties in beam quality conversion factors are investigated and amount to between 0.4 and 1.1% depending on the assumptions used. Conclusion: The calculations obtained in this work characterize the use of an NE2571 ion chamber for reference dosimetry of high-energy electron beams. These results will be useful as the AAPM continues to review its reference dosimetry protocols.
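
    A Monte Carlo beam quality conversion factor of the kind computed above is a double ratio of calculated doses: kQ = (Dw/Dair)Q / (Dw/Dair)Co-60. A sketch of that bookkeeping; the dose values below are invented placeholders, not the EGSnrc results reported in the abstract:

```python
# Beam quality conversion factor from Monte Carlo dose ratios:
# k_Q = (D_w / D_air)_Q / (D_w / D_air)_Co60.
def beam_quality_conversion(dw_q, dair_q, dw_co, dair_co):
    return (dw_q / dair_q) / (dw_co / dair_co)

# Illustrative normalized doses (water dose set to 1 in both beams).
k_q = beam_quality_conversion(dw_q=1.000, dair_q=0.905,
                              dw_co=1.000, dair_co=0.820)
```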

  5. SU-D-213-06: Dosimetry of Modulated Electron Radiation Therapy Using Fricke Gel Dosimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawad, M Abdel; Elgohary, M; Hassaan, M

    Purpose: Modulated electron radiation therapy (MERT) has been proposed as an effective modality for the treatment of superficial targets. MERT uses multiple beams of different energies that are intensity modulated to deliver an optimized dose distribution. Energy-independent dosimeters are therefore needed for quantitative evaluation of MERT dose distributions and for measurement of the absolute doses delivered to patients. In the current work we study the feasibility of Fricke gel dosimeters for MERT dosimetry. Methods: Batches of radiation-sensitive Fricke gel were fabricated and poured into polymethyl methacrylate cuvettes. The samples were irradiated in a solid water phantom, with a thick layer of bolus used as buildup. A spectrophotometer system was used to measure the color change (absorbance) before and after irradiation, from which the net absorbance was calculated. Calibration curves relating the measured net absorbance to absorbed dose were constructed for all available electron energies. Dosimetric measurements were performed for mixed electron beam delivery, and also for segmented field delivery with the dosimeter placed at the junction of two adjacent electron beams of different energies. Doses measured by gel dosimetry were compared with those calculated by our Precise treatment planning system. We also initiated a Monte Carlo study to evaluate the water equivalence of the dosimeters. The MCBEAM and MCSIM codes were used for treatment head simulation and phantom dose calculation. Percentage depth dose (PDD) curves and profiles were calculated for electron beams incident on a phantom designed with a 1 cm slab of Fricke gel. Results: The calibration curves showed no observable energy dependence for any of the studied electron beam energies. Good agreement was obtained between calculated doses and those obtained by gel dosimetry. Monte Carlo results illustrated the tissue equivalence of the gel dosimeters.
    Conclusion: Fricke gel dosimeters are a good option for dosimetric quality assurance prior to MERT application.
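
    The calibration step described in the abstract, relating net absorbance (post-irradiation minus pre-irradiation) to absorbed dose, can be sketched as a simple linear fit. All readings below are hypothetical, not the paper's data:

```python
import numpy as np

doses = np.array([0.0, 5.0, 10.0, 15.0, 20.0])     # Gy (assumed)
a_pre = np.array([0.10, 0.10, 0.11, 0.10, 0.10])   # absorbance before irradiation
a_post = np.array([0.10, 0.20, 0.31, 0.40, 0.50])  # absorbance after irradiation
net_a = a_post - a_pre                             # net absorbance

# Linear calibration: net absorbance = slope * dose + intercept
slope, intercept = np.polyfit(doses, net_a, 1)

def dose_from_absorbance(net_absorbance):
    """Read out dose (Gy) from a measured net absorbance."""
    return (net_absorbance - intercept) / slope

print(round(dose_from_absorbance(0.25), 1))   # prints 12.5
```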

  6. Pediatric dosimetry for intrapleural lung injections of 32P chromic phosphate

    NASA Astrophysics Data System (ADS)

    Konijnenberg, Mark W.; Olch, Arthur

    2010-10-01

    Intracavitary injections of 32P chromic phosphate are used in the therapy of pleuropulmonary blastoma and pulmonary sarcomas in children. The lung dose, however, has never been calculated, despite the potential risk of lung toxicity from treatment. In this work the dosimetry has been calculated in target tissue and lung for pediatric phantoms. Pleural cavities were modeled in the Monte Carlo code MCNP within the pediatric MIRD phantoms. Both the depth-dose curves in the pleural lining and into the lung and the 3D dose distributions were calculated for homogeneous and inhomogeneous 32P activity distributions. Dose-volume histograms for the lung tissue and isodose graphs were generated. The results for the 2D depth-dose curve in the pleural lining and tumor around the pleural cavity correspond well with the point kernel model-based recommendations. With a 2 mm thick pleural lining, one-third of the lung parenchyma volume receives a dose of more than 30 Gy (V30) for 340 MBq of 32P in a 10 year old; this is close to lung tolerance. Younger children will receive a larger dose to the lung when the lung density remains equal to the adult value: the relative lung volume V30 for a 5 year old is 35% at an activity of 256 MBq, and for a 1 year old 165 MBq yields a V30 of 43%. At higher lung tissue densities V30 stays below 32%. All activities yield a therapeutic dose of at least 225 Gy in the pleural lining. With a more normal pleural lining thickness (0.5 mm instead of 2 mm) the injected activities would have to be reduced by a factor of 5 to obtain tolerable lung doses in pediatric patients. Previous dosimetry recommendations for the adult apply well down to lung surface areas of 400 cm2. Monte Carlo dosimetry quantifies the three-dimensional dose distribution, providing better insight into the maximum tolerable activity for this therapy.
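
    A dose-volume metric such as V30, the fraction of the lung volume receiving more than 30 Gy, reduces to counting voxels above threshold in the voxelized 3D dose distribution. A sketch with a synthetic dose grid (not the paper's data):

```python
import numpy as np

# Synthetic 3D dose grid (Gy): exponential fall-off with depth, uniform in-plane.
# The 90 Gy surface dose and fall-off constant are made-up demonstration values.
depth = np.linspace(0.0, 5.0, 50)                 # cm into the lung
dose_profile = 90.0 * np.exp(-depth)
lung_dose = np.broadcast_to(dose_profile[:, None, None], (50, 20, 20))

def v_dose(dose_grid, threshold_gy):
    """Fraction of voxels receiving more than threshold_gy."""
    return np.count_nonzero(dose_grid > threshold_gy) / dose_grid.size

v30 = v_dose(lung_dose, 30.0)
print(round(v30, 2))   # prints 0.22
```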

  7. A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.

    2006-01-01

    The International Space Station (ISS) provides the proving ground for future long-duration human activities in space. Ionizing radiation measurements aboard the ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both environmental model and nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermo Luminescent Detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate six-degree-of-freedom (DOF) description of ISS trajectory and orientation.

  8. An electron-beam dose deposition experiment: TIGER 1-D simulation code versus thermoluminescent dosimetry

    NASA Astrophysics Data System (ADS)

    Murrill, Steven R.; Tipton, Charles W.; Self, Charles T.

    1991-03-01

    The dose absorbed in an integrated circuit (IC) die exposed to a pulse of low-energy electrons is a strong function of both electron energy and the surrounding packaging materials. This report describes an experiment designed to measure how well the Integrated TIGER Series one-dimensional (1-D) electron transport simulation program predicts dose correction factors for a state-of-the-art IC package and package/printed circuit board (PCB) combination. These derived factors are compared with data obtained experimentally using thermoluminescent dosimeters (TLDs) and the FX-45 flash x-ray machine operated in electron-beam (e-beam) mode. The results of this experiment show that the TIGER 1-D simulation code can be used to accurately predict FX-45 e-beam dose deposition correction factors for reasonably complex IC packaging configurations.

  9. Internal dosimetry through GATE simulations of preclinical radiotherapy using a melanin-targeting ligand

    NASA Astrophysics Data System (ADS)

    Perrot, Y.; Degoul, F.; Auzeloux, P.; Bonnet, M.; Cachin, F.; Chezal, J. M.; Donnarieix, D.; Labarre, P.; Moins, N.; Papon, J.; Rbah-Vidal, L.; Vidal, A.; Miot-Noirault, E.; Maigne, L.

    2014-05-01

    The GATE Monte Carlo simulation platform, based on the Geant4 toolkit, is under constant improvement for dosimetric calculations. In this study, we explore its use for the dosimetry of preclinical targeted radiotherapy of melanoma using a new melanin-targeting radiotracer labeled with iodine-131. Calculated absorbed fractions and S values for spheres and murine models (digital and CT-scan-based mouse phantoms) are compared between the GATE and EGSnrc Monte Carlo codes, considering monoenergetic electrons and the detailed energy spectrum of iodine-131. The behavior of the Geant4 standard and low-energy models is also tested. Following the different authors' guidelines concerning the parameterization of electron physics models, this study demonstrates agreement with EGSnrc within 1.2% and 1.5% for the calculation of S values for small spheres and mouse phantoms, respectively. S values calculated with GATE are then used to compute the dose distribution in organs of interest using the activity distribution in mouse phantoms. This study gives the dosimetric data required for the translation of the new treatment to the clinic.
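
    S values feed into the standard MIRD dose equation, D(target) = sum over sources of A-tilde(source) x S(target <- source), where A-tilde is the time-integrated (cumulated) activity. A sketch of that bookkeeping; the organ names, S values, and activities below are all made up for illustration:

```python
# Hypothetical S values (Gy per Bq*s), keyed as (target, source):
s_values = {
    ("tumour", "tumour"): 4.0e-11,
    ("tumour", "liver"): 2.0e-13,
}
# Hypothetical time-integrated (cumulated) activities (Bq*s) per source region:
cumulated_activity = {
    "tumour": 5.0e8,
    "liver": 2.0e9,
}

def organ_dose(target, s_values, cumulated_activity):
    """MIRD dose: sum over sources of cumulated activity times S value."""
    return sum(a * s_values.get((target, src), 0.0)
               for src, a in cumulated_activity.items())

print(round(organ_dose("tumour", s_values, cumulated_activity), 4))  # prints 0.0204
```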

  10. Development and application of a 3-D geometry/mass model for LDEF satellite ionizing radiation assessments

    NASA Technical Reports Server (NTRS)

    Colborn, B. L.; Armstrong, T. W.

    1992-01-01

    A computer model of the three-dimensional geometry and material distributions of the LDEF spacecraft, experiment trays, and, for selected trays, the components of experiments within a tray was developed for use in ionizing radiation assessments. The model is being applied to provide 3-D shielding distributions around radiation dosimeters to aid in data interpretation, particularly in assessing the directional properties of the radiation exposure. The model has also been interfaced with radiation transport codes for 3-D dosimetry response predictions and for calculations related to determining the accuracy of trapped proton and cosmic ray environment models. The methodology used in developing the 3-D LDEF model and the level of detail incorporated are described. Currently, the trays modeled in detail are F2, F8, H3, and H12. Applications of the model discussed include the 3-D shielding distributions around various dosimeters, the influence of shielding on dosimetry responses, and comparisons of dose predictions based on the present 3-D model with those from the 1-D geometry model approximations used in initial estimates.

  11. MAGIC polymer gel for dosimetric verification in boron neutron capture therapy

    PubMed Central

    Heikkinen, Sami; Kotiluoto, Petri; Serén, Tom; Seppälä, Tiina; Auterinen, Iiro; Savolainen, Sauli

    2007-01-01

    Radiation‐sensitive polymer gels are among the most promising three‐dimensional dose verification tools developed to date. We tested the normoxic polymer gel dosimeter known by the acronym MAGIC (methacrylic and ascorbic acid in gelatin initiated by copper) to evaluate its use in boron neutron capture therapy (BNCT) dosimetry. We irradiated a large cylindrical gel phantom (diameter: 10 cm; length: 20 cm) in the epithermal neutron beam of the Finnish BNCT facility at the FiR 1 nuclear reactor. Neutron irradiation was simulated with a Monte Carlo radiation transport code MCNP. To compare dose–response, gel samples from the same production batch were also irradiated with 6 MV photons from a medical linear accelerator. Irradiated gel phantoms then underwent magnetic resonance imaging to determine their R2 relaxation rate maps. The measured and normalized dose distribution in the epithermal neutron beam was compared with the dose distribution calculated by computer simulation. The results support the feasibility of using MAGIC gel in BNCT dosimetry. PACS numbers: 87.53.Qc, 87.53.Wz, 87.66.Ff PMID:17592463

  12. Monte Carlo study of a 60Co calibration field of the Dosimetry Laboratory Seibersdorf.

    PubMed

    Hranitzky, C; Stadtmann, H

    2007-01-01

    The gamma radiation fields of the reference irradiation facility of the Dosimetry Laboratory Seibersdorf, with collimated beam geometry, are used for calibrating radiation protection dosemeters. A close-to-reality simulation model of the facility, including the complex geometry of a 60Co source, was set up using the Monte Carlo code MCNP. The goal of this study is to characterise the radionuclide gamma calibration field and the resulting air-kerma distributions inside the 20 m long measurement hall. For the whole range of source-detector distances (SDD) along the central beam axis, simulated and measured relative air-kerma values agree within +/-0.6%. Influences on the accuracy of the simulation results are investigated, including source mass density effects and detector volume dependencies. A constant scatter contribution of approximately 1% from the lead ring collimator and an increasing scatter contribution from the concrete floor for distances above 7 m are identified, resulting in a total air-kerma scatter contribution below 5%, in accordance with the ISO 4037-1 recommendations.
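
    To first order, the air kerma along the axis of such a collimated field follows an inverse-square fall-off plus a small scatter contribution. The sketch below uses a constant scatter fraction as a crude stand-in for the collimator component described above; the numbers are assumptions, not the facility's data:

```python
def relative_air_kerma(sdd_m, sdd_ref_m=1.0, scatter_fraction=0.01):
    """Air kerma relative to the reference distance: inverse-square primary
    plus a constant scatter fraction (a deliberate simplification)."""
    primary = (sdd_ref_m / sdd_m) ** 2
    return primary * (1.0 + scatter_fraction)

# Doubling the source-detector distance quarters the primary component:
print(round(relative_air_kerma(2.0), 4))   # prints 0.2525
```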

  13. SU-F-18C-09: Assessment of OSL Dosimeter Technology in the Validation of a Monte Carlo Radiation Transport Code for CT Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carver, D; Kost, S; Pickens, D

    Purpose: To assess the utility of optically stimulated luminescent (OSL) dosimeter technology in calibrating and validating a Monte Carlo radiation transport code for computed tomography (CT). Methods: Exposure data were taken using both a standard 100-mm CT pencil ionization chamber and a series of 150-mm OSL CT dosimeters. Measurements were made at system isocenter in air as well as in standard 16-cm (head) and 32-cm (body) CTDI phantoms at isocenter and at the 12 o'clock positions. Scans were performed on a Philips Brilliance 64 CT scanner at 100 and 120 kVp and 300 mAs with a nominal beam width of 40 mm. A radiation transport code to simulate the CT scanner conditions was developed using the GEANT4 physics toolkit. The imaging geometry and associated parameters were simulated for each ionization chamber and phantom combination. Simulated absorbed doses were compared to both CTDI100 values determined from the ion chamber and CTDI100 values reported from the OSLs. The dose profiles from each simulation were also compared to the physical OSL dose profiles. Results: CTDI100 values reported by the ion chamber and OSLs are generally in good agreement (average percent difference of 9%) and provide a suitable way to calibrate doses obtained from simulation to real absorbed doses. Simulated and real CTDI100 values agree to within 10%, and the simulated dose profiles also predict the physical profiles reported by the OSLs. Conclusion: Ionization chambers are generally considered the standard for absolute dose measurements. However, OSL dosimeters may also serve as a useful tool, with the significant benefit of also assessing the radiation dose profile. This may offer an advantage to those developing simulations for assessing radiation dosimetry, such as verification of the spatial dose distribution and beam width.
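
    CTDI100 is defined as the integral of the single-rotation dose profile D(z) over +/-50 mm, divided by the nominal total beam width N*T; this is also how it would be extracted from a measured OSL dose profile. A sketch with an idealized rectangular profile (no scatter tails; the 50 mGy level is an arbitrary demonstration value, the 40 mm beam width is taken from the abstract):

```python
import numpy as np

z = np.arange(-75.0, 76.0)            # mm, 1 mm spacing
beam_width_mm = 40.0                  # nominal N*T
# Idealized rectangular dose profile: 50 mGy inside the beam, zero outside.
profile = np.where(np.abs(z) <= beam_width_mm / 2.0, 50.0, 0.0)

# Integrate D(z) over the central +/-50 mm (trapezoidal rule), divide by N*T:
mask = np.abs(z) <= 50.0
zc, dc = z[mask], profile[mask]
integral = np.sum((dc[:-1] + dc[1:]) / 2.0 * np.diff(zc))   # mGy * mm
ctdi100 = integral / beam_width_mm
print(ctdi100)   # prints 51.25 (slightly above 50 from the edge trapezoids)
```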

  14. Assessment of the Partially Resolved Numerical Simulation (PRNS) Approach in the National Combustion Code (NCC) for Turbulent Nonreacting and Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2008-01-01

    This paper describes an approach that aims at bridging the gap between the traditional Reynolds-averaged Navier-Stokes (RANS) approach and the traditional large eddy simulation (LES) approach. It has the characteristics of very large eddy simulation (VLES), and we call this approach the partially resolved numerical simulation (PRNS). Systematic simulations using the National Combustion Code (NCC) have been carried out for fully developed turbulent pipe flows at different Reynolds numbers to evaluate the PRNS approach. Also presented are sample results for two demonstration cases: nonreacting flow in a single-injector flame tube and reacting flow in a Lean Direct Injection (LDI) hydrogen combustor.

  15. Tabulated Equivalent SDR Flamelet (TESF) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kundu, Prithwish; Ameen, Muhsin Mohammed; Unnikrishnan, Umesh

    The code implements a novel tabulated combustion model for non-premixed flames in CFD solvers. This technique implements an unsteady flamelet tabulation without using progress variables for non-premixed flames. It also has the capability to include history effects, which is unique among tabulated flamelet models. The flamelet table generation code can be run in parallel to generate tables with large chemistry mechanisms in relatively short wall-clock times, and the combustion model reads these tables. The framework can be coupled with any CFD solver using either RANS or LES turbulence models, enabling CFD solvers to run large chemistry mechanisms on large grids at relatively low computational cost. It has currently been coupled with the Converge CFD code and validated against available experimental data. The model can be used to simulate non-premixed combustion in a variety of applications such as reciprocating engines, gas turbines, and industrial burners operating over a wide range of fuels.

  16. Development of a computer code to calculate the distribution of radionuclides within the human body by the biokinetic models of the ICRP.

    PubMed

    Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki

    2015-03-01

    This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, formulated as a system of first-order differential equations. In the source program of this code, conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. The code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It can also easily handle revisions of a biokinetic model or the application of a user-defined model, because all information on the biokinetic model structure is imported from an input file. Sample calculations are performed with the ICRP model, and the results are compared with analytic solutions of simple models. The results suggest that this code is sufficient for dose estimation and the interpretation of monitoring data.
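
    First-order compartmental kinetics of the kind BRAID solves can be written as dA/dt = M A, with the transfer matrix M collecting the translocation and decay rate constants. A toy two-compartment sketch, integrated with a simple explicit Euler scheme; the rate constants are invented for demonstration, not ICRP values:

```python
import numpy as np

# Hypothetical rate constants (per day) -- NOT values from any ICRP model:
lam_transfer = 0.1   # compartment 1 -> compartment 2
lam_excrete = 0.05   # compartment 2 -> excretion
lam_decay = 0.02     # radioactive decay, applied to both compartments

# Transfer matrix M for dA/dt = M A, with A = [A1, A2]:
M = np.array([[-(lam_transfer + lam_decay), 0.0],
              [lam_transfer, -(lam_excrete + lam_decay)]])

A = np.array([1.0, 0.0])   # all activity initially in compartment 1
dt = 0.001                 # days; explicit Euler needs a small step
for _ in range(10_000):    # integrate out to t = 10 days
    A = A + dt * (M @ A)

print(A.round(4))          # fractional activity remaining in each compartment
```

    For a production code one would use the analytic matrix-exponential solution or a stiff ODE solver rather than explicit Euler, but the structure of the problem is the same.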

  17. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; hide

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy systems, and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data, as well as the effects of solar and geomagnetic modulation, have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA uses the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing, and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, pending the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low-energy ion interactions.

  18. Use of computer code for dose distribution studies in A 60CO industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3, chosen for their uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point kernel technique, build-up factors are fitted by a geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code concerned the source simulation: point sources were used instead of pencils, and an energy spectrum and anisotropic emission were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average (14.3 kGy).
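
    The quoted comparisons can be checked directly from the dose values in the abstract; with the usual sign convention, a calculated value below the experimental average comes out negative:

```python
# Dose values (kGy) taken from the abstract above.
def percent_dev(calculated, measured):
    """Deviation of a calculated dose from the measured dose, in percent."""
    return 100.0 * (calculated - measured) / measured

print(round(percent_dev(18.2, 16.8), 1))   # maximum dose: prints 8.3
print(round(percent_dev(13.8, 14.3), 1))   # minimum dose: prints -3.5
```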

  19. BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) - Generation Methodology and Preliminary Testing of two ENEA-Bologna Group Cross Section Libraries for LWR Shielding and Pressure Vessel Dosimetry

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Sinitsa, Valentin; Orsi, Roberto; Frisoni, Manuela

    2016-02-01

    Two broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format, dedicated to LWR shielding and pressure vessel dosimetry applications, were generated following the methodology recommended by the US ANSI/ANS-6.1.2-1999 (R2009) standard. These libraries, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, are based on JEFF-3.1.1 and ENDF/B-VII.0 nuclear data, respectively, and adopt the same broad-group energy structure (47 n + 20 γ) as the similar ORNL BUGLE-96 library. They were obtained from the ENEA-Bologna VITJEFF311.BOLIB and VITENDF70.BOLIB libraries in AMPX format for nuclear fission applications, through problem-dependent cross section collapsing with the ENEA-Bologna 2007 revision of the ORNL SCAMPI nuclear data processing system. Both parent libraries are based on the Bondarenko self-shielding factor method and have the same AMPX format and fine-group energy structure (199 n + 42 γ) as the similar ORNL VITAMIN-B6 library from which BUGLE-96 was obtained at ORNL. A synthesis of a preliminary validation of the cited BUGLE-type libraries, performed through 3D fixed-source transport calculations with the ORNL TORT-3.2 SN code, is included. The calculations addressed the PCA-Replica 12/13 and VENUS-3 engineering neutron shielding benchmark experiments, specifically conceived to test the accuracy of nuclear data and transport codes in LWR shielding and radiation damage analyses.

  20. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  1. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
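
    A cycle-integrated mean absolute error of the kind used to score the models is simply the time average of |model - observation| over the diurnal cycle. A sketch with a synthetic series (the observed profile and the constant 0.5 m/s model bias are invented, not benchmark data):

```python
import numpy as np

t = np.linspace(0.0, 24.0, 25)                       # hours through the cycle
observed = 8.0 + 2.0 * np.sin(2 * np.pi * t / 24.0)  # e.g. hub-height wind speed (m/s)
modelled = observed + 0.5                            # model with a constant 0.5 m/s bias

mae = np.mean(np.abs(modelled - observed))           # cycle-integrated MAE
print(round(mae, 2))   # prints 0.5
```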

  2. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE PAGES

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...

    2017-06-13

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  3. Les Houches ''Physics at TeV Colliders 2003'' Beyond the Standard Model Working Group: Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allanach, B

    2004-03-01

    The work contained herein constitutes a report of the ''Beyond the Standard Model'' working group for the Workshop ''Physics at TeV Colliders'', Les Houches, France, 26 May-6 June, 2003. The research presented is original and was performed specifically for the workshop. Tools for calculations in the minimal supersymmetric standard model are presented, including a comparison of the dark matter relic density predicted by public codes. Reconstruction of supersymmetric particle masses at the LHC and a future linear collider facility is examined. Less orthodox supersymmetric signals such as non-pointing photons and R-parity violating signals are studied. Features of extra-dimensional models are examined next, including measurement strategies for radions and Higgs bosons, as well as the virtual effects of Kaluza-Klein modes of gluons. Finally, there is an update on LHC Z' studies.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, R.

    The US National Lung Screening Trial (NLST) was a multi-center randomized, controlled trial comparing low-dose CT (LDCT) with posterior-anterior (PA) chest x-ray (CXR) in screening older, current and former heavy smokers for early detection of lung cancer. Recruitment was launched in September 2002 and ended in April 2004, when 53,454 participants had been randomized in equal proportions at 33 screening sites. Funded by the National Cancer Institute, this trial demonstrated that LDCT screening reduced lung cancer mortality. The US Preventive Services Task Force (USPSTF) cited NLST findings and conclusions in its deliberations and analysis of lung cancer screening. Under the 2010 Patient Protection and Affordable Care Act, the favorable USPSTF recommendation regarding lung cancer CT screening assisted in obtaining third-party payer coverage for screening. The objective of this session is to provide an introduction to the NLST and the trial findings, in addition to a comprehensive review of the dosimetry investigations and assessments completed using individual NLST participant CT and CXR examinations. Session presentations will review and discuss the findings of two independent assessments: a CXR assessment and a CT investigation calculating individual organ dosimetry values. The CXR assessment reviewed a total of 73,733 chest x-ray exams performed on 92 chest imaging systems, of which 66,157 participant examinations were used. The CT organ dosimetry investigation collected scan parameters from 23,773 CT examinations, a subset of the 75,133 CT examinations performed using 97 multi-detector CT scanners. Organ dose conversion coefficients were calculated using a Monte Carlo code. An experimentally validated CT scanner simulation was coupled with 193 adult hybrid computational phantoms representing the height and weight of the current U.S. population.
    The dose to selected organs was calculated using the organ dose library and the abstracted scan parameters. This session will review the results and summarize the individualized doses to major organs, and the mean effective dose and CTDIvol estimates, for 66,157 PA chest and 23,773 CT examinations, respectively, using size-dependent computational phantoms coupled with Monte Carlo calculations. Learning Objectives: Review and summarize relevant NLST findings and conclusions. Understand the scope and scale of the NLST specific to participant dosimetry. Provide a comprehensive review of NLST participant dosimetry assessments. Summarize the results of an investigation providing individualized organ dose estimates for NLST participant cohorts.

  5. Laboratory Services Guide

    DTIC Science & Technology

    1994-10-01

    dosimetry services using thermoluminescent dosimeters (TLDs) to meet 10 CFR 19, 20, 30-36, 40 and 70; to provide dosimetry service for environmental...USAF Personnel Dosimetry Branch. Once it is determined that area or external dosimetry is necessary, request the number of TLDs required by FAX or letter... dosimetry. Request TLDs 2-4 weeks in advance and always designate a control badge. The Radiation Dosimetry Branch thanks you in advance for doing everything

  6. Residus de 2-formes differentielles sur les surfaces algebriques et applications aux codes correcteurs d'erreurs

    NASA Astrophysics Data System (ADS)

    Couvreur, A.

    2009-05-01

    The theory of algebraic-geometric codes was developed in the early 1980s following a paper by V. D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first, called "functional", uses rational functions on X; the second, called "differential", involves rational 1-forms on the curve. Hundreds of papers have been devoted to the study of such codes. In addition, a generalization of the functional construction to algebraic varieties of arbitrary dimension was given by Y. Manin in a 1984 article. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. We then study the properties of these codes, particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears: whereas for curves a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes that are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that some answers to open problems "à la Bertini" could give very interesting information on the parameters of these codes.

  7. Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Matthew F.; Ananthan, Shreyas; Churchfield, Matt

    This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning the period July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development, and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales.
    The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.

  8. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimation depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10% for photons and electrons, respectively. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
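
    The quantity being compared between the two codes is the MIRD-style absorbed fraction. For a source region r_S and target region r_T it is defined (standard definition, not notation specific to this paper) as:

```latex
\phi(r_T \leftarrow r_S) = \frac{E(r_T)}{E_0(r_S)},
\qquad
\Phi(r_T \leftarrow r_S) = \frac{\phi(r_T \leftarrow r_S)}{m(r_T)}
```

    where E(r_T) is the energy absorbed in the target, E_0(r_S) the energy emitted by the source, m(r_T) the target mass, and Φ the specific absorbed fraction; for the uniformly irradiated spheres studied here, r_T = r_S.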

  9. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of the simulation task to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
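
    The multi-device task assignment described above can be caricatured in a few lines: split the requested particle histories into per-device batches with independent seeds, run the batches concurrently, and merge the tallies. The following is an illustrative CPU sketch only; the function names and the placeholder "physics" are ours, not the paper's CUDA implementation.

```python
import multiprocessing as mp
import random

def run_batch(args):
    """Run one batch of particle histories on one worker.

    Toy surrogate for a per-GPU kernel launch: each 'history' deposits a
    random energy fraction (placeholder physics, not MCNP-equivalent).
    """
    n_histories, seed = args
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n_histories):
        tally += rng.random()  # placeholder for an energy-deposition sample
    return tally

def simulate(total_histories, n_workers=2):
    """Split the workload evenly across workers and merge the tallies."""
    per_worker = total_histories // n_workers
    batches = [(per_worker, seed) for seed in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        tallies = pool.map(run_batch, batches)
    return sum(tallies) / (per_worker * n_workers)  # mean deposit per history

if __name__ == "__main__":
    print(simulate(100_000))
```

    With independent per-batch seeds the merged tally is statistically equivalent to a single long run, which is why the speedup factors double when the history count is split over two devices.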

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Danielle; Siegbahn, Albert; Fallone, Gin

    Purpose: The BioMedical Imaging and Therapy (BMIT) beamlines at the Canadian Light Source offer the opportunity for investigating novel imaging and therapy applications of synchrotron radiation. A necessary component in advancing this research, and in progressing toward clinical applications, is the availability of accurate dosimetry that is traceable to a standards institution. However, dosimetry in this setting is challenging: these beams are typically small, non-uniform, and highly intense. This work describes air kerma rate measurements on a BMIT beamline using a free-air ionization chamber (FAC). Methods: The measurements were taken at the 05B1-1 beamline (∼8 – 100 keV) for several beam qualities with mean energies between 20.0 and 84.0 keV. The Victoreen Model 480 cylindrical FAC, with a specially fabricated 0.52 mm diameter aperture, was used to measure air kerma rates. The required correction factors were determined using a variety of methods: tabulated data, measurements, theoretical calculations and Monte Carlo simulations (EGSnrc user code egs-fac). Results: The measured air kerma rates ranged from 0.270 Gy/min ± 13.6% to 312 Gy/min ± 2.7%. At lower energies (low filtration), the most impactful correction factors were those for ion recombination and for x-ray attenuation. Conclusions: These measurements marked the first absolute dosimetry performed at the BMIT beamlines. The experimental and Monte Carlo methods developed will allow air kerma rates to be measured under other experimental conditions, provide a benchmark against which other dosimeters can be compared, and provide a reference for imaging and therapy research programs on this beamline.
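
    For orientation, a free-air chamber determines air kerma from the ionization current, the air mass of the collecting volume, W/e, and a product of multiplicative correction factors such as those listed above. The sketch below uses illustrative numbers only (not the paper's data), and ignores the radiative-loss correction (1 − g), which is negligible at these photon energies.

```python
import math

W_OVER_E = 33.97   # J/C: mean energy expended in dry air per unit charge
RHO_AIR = 1.2046   # kg/m^3: density of dry air at 20 C, 101.325 kPa

def air_kerma_rate(current_A, collecting_volume_m3, corrections):
    """Air kerma rate in Gy/s: K = (I / (rho * V)) * (W/e) * prod(k_i)."""
    mass_kg = RHO_AIR * collecting_volume_m3
    k_total = math.prod(corrections.values())  # 1.0 for an empty dict
    return (current_A / mass_kg) * W_OVER_E * k_total

# Hypothetical correction factors of the kinds the paper determines:
ks = {"k_recombination": 1.004, "k_attenuation": 1.012, "k_scatter": 0.997}
print(air_kerma_rate(1.0e-9, 1.0e-6, ks))  # Gy/s for a 1 nA current
```

    Since the corrections enter multiplicatively, their relative uncertainties add in quadrature into the quoted percentage uncertainties on the air kerma rates.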

  11. Dose Enhancement near Metal Interfaces in Synthetic Diamond Based X-ray Dosimeters

    NASA Astrophysics Data System (ADS)

    Alamoudi, Dalal

    Diamond is an attractive material for medical dosimetry due to its radiation hardness, fast response, chemical resilience, small sensitive volume, high spatial resolution, near-tissue equivalence, and energy and dose rate independence. These properties make diamond a promising material for medical dosimetry compared to other semiconductor detector materials, and for wider radiation detection applications. This study focuses on one important factor in radiation detector design: the influence of dose enhancement at metallic interfaces on the photocurrent performance of synthetic diamond radiation dosimeters with carbon-based electrodes, as a function of bias voltage. Monte Carlo (MC) simulations with the BEAMnrc code were carried out to simulate the dose enhancement factor (DEF), which was compared against the equivalent photocurrent ratio from experimental investigation. MC simulations show that the sensitive region of the absorbed dose distribution covers a few micrometres from the interface. Experimentally, two single-crystal (SC) samples and one polycrystalline (PC) sample with carbon-based electrodes were used. The samples were each mounted inside a tissue-equivalent encapsulation designed to minimize fluence perturbations. Copper, gold and lead were investigated experimentally as generators of photoelectrons using 50 kVp and 100 kVp X-rays relevant to medical dosimetry. The results show enhancement of the detectors' photocurrent when different metals are butted up to the diamond detector. The variation in the photocurrent ratio measurements depends on the type of diamond sample, its electrode fabrication and the applied bias voltage, indicating that the dose enhancement from the diamond-metal interface modifies the electronic performance of the detector.

  12. DRDC Ottawa Participation in the SILENE Accident Dosimetry Intercomparison Exercise. June 10-21, 2002

    DTIC Science & Technology

    2002-11-01

    of CaF2:Mn and Al2O3 TLDs for gamma-ray dosimetry). In addition, DRDC Ottawa has recently substantially expanded its efforts in radiation dosimetry...use of any real-time electronic dosimeter. Foils have long been proposed and used for criticality dosimetry (as well as for general monitoring of...ray dosimetry. DRDC Ottawa offers a number (over five) of thermoluminescence dosimetry (TLD) systems. The choice of any particular TLD depends

  13. Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent X-Ray Source: an electronic brachytherapy source.

    PubMed

    Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve

    2006-11-01

    A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated values of the point-source radial dose function gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1
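
    The 2D formalism referred to here is the standard AAPM TG-43 dose-rate equation; the following is its textbook form (general notation, not results specific to this Source):

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G(r,\theta)}{G(r_0,\theta_0)}\, g(r)\, F(r,\theta),
\qquad
G_P(r,\theta) = r^{-2}
```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G the geometry function (point-source approximation G_P shown), g(r) the radial dose function, and F(r,θ) the 2D anisotropy function, with reference point r_0 = 1 cm, θ_0 = 90°. The quoted gP(5) values are g(r) evaluated at r = 5 cm under the point-source geometry function.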

  14. Local dynamic subgrid-scale models in channel flow

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1994-01-01

    The dynamic subgrid-scale (SGS) model has given good results in the large-eddy simulation (LES) of homogeneous isotropic or shear flow, and in the LES of channel flow using averaging in two or three homogeneous directions (the DA model). In order to simulate flows in general, complex geometries (with few or no homogeneous directions), the dynamic SGS model needs to be applied at a local level in a numerically stable way. Channel flow, which is inhomogeneous and wall-bounded in only one direction, provides a good initial test for local SGS models. Tests of the dynamic localization model were performed previously in channel flow using a pseudospectral code, and good results were obtained. Numerical instability due to persistently negative eddy viscosity was avoided either by constraining the eddy viscosity to be positive or by limiting the time that eddy viscosities could remain negative by co-evolving the SGS kinetic energy (the DLk model). The DLk model, however, was too expensive to run in the pseudospectral code due to a large near-wall term in the auxiliary SGS kinetic energy (k) equation. One objective was therefore to implement the DLk model in a second-order central finite-difference channel code, in which the auxiliary k equation can be integrated implicitly in time at greatly reduced cost, and to assess its performance in comparison with the plane-averaged dynamic model, with no model at all, and with direct numerical simulation (DNS) and/or experimental data. Other local dynamic SGS models have been proposed recently, e.g., constrained dynamic models with random backscatter, and models with eddy viscosity terms averaged in time over material path lines rather than in space. Another objective was to incorporate and test these models in channel flow.
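
    For context, the dynamic procedure determines the Smagorinsky coefficient from the resolved field via the Germano identity; a standard statement (not the exact discrete form used in these channel codes) is:

```latex
\nu_t = C\,\bar{\Delta}^2\,|\bar{S}|,
\qquad
L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \widehat{\bar{u}}_i\,\widehat{\bar{u}}_j,
\qquad
C = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
```

    where overbars denote grid filtering, hats denote test filtering, M_ij collects the difference of the modeled stresses at the two filter levels, and ⟨·⟩ is the averaging operator (over homogeneous planes in the DA model). Dropping the averaging gives a local coefficient C(x,t), and it is the resulting negative eddy-viscosity values that cause the numerical instability discussed in the abstract.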

  15. Parallel iterative methods: applications in neutronics and fluid mechanics

    NASA Astrophysics Data System (ADS)

    Qaddouri, Abdessamad

    In this thesis, parallel computing is applied successively to neutronics and to fluid mechanics. In each of these two applications, iterative methods are used to solve the system of algebraic equations resulting from the discretization of the equations of the physical problem. In the neutronics problem, the computation of the collision probability (CP) matrices, as well as a multigroup iterative scheme using an inverse power method, are parallelized. In the fluid mechanics problem, a finite-element code using a preconditioned GMRES-type iterative algorithm is parallelized. This thesis is presented as six articles followed by a conclusion. The first five articles deal with the neutronics applications and trace the evolution of our work in this domain. This evolution proceeds through a parallel computation of the CP matrices and a parallel multigroup algorithm tested on a one-dimensional problem (article 1), then through two parallel algorithms, one multiregion and the other multigroup, tested on two-dimensional problems (articles 2-3). These first two stages are followed by the application of two acceleration techniques, neutron rebalancing and residual minimization, to the two parallel algorithms (article 4). Finally, the multigroup algorithm and the parallel computation of the CP matrices were implemented in the production code DRAGON, where the tests are more realistic and can be three-dimensional (article 5). The sixth article (article 6), devoted to the fluid mechanics application, deals with the parallelization of a finite-element code, FES, in which the graph partitioner METIS and the PSPARSLIB library are used.
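
    The multigroup scheme mentioned here relies on a power-method eigenvalue iteration. A minimal serial sketch of the plain variant is shown below; the inverse variant applies the same loop to solutions of A y = x (one linear solve per iteration), and the production DRAGON implementation is far richer than this.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Dominant eigenvalue and eigenvector of A by power iteration."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)  # Rayleigh-style estimate, ||x|| = 1
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2
lam, _ = power_iteration(A)
print(lam)  # converges to the dominant eigenvalue, 5
```

    The parallelization opportunity exploited in the thesis lies in the matrix-vector products (and, for the inverse variant, the linear solves), which dominate the cost per iteration.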

  16. Analysis of localised dose distribution in human body by Monte Carlo code system for photon irradiation.

    PubMed

    Ohnishi, S; Odano, N; Nariyama, N; Saito, K

    2004-01-01

    In usual personal dosimetry, whole-body irradiation is assumed. However, partial irradiation is increasingly common, and the protection quantities behave differently under those irradiation conditions. A code system has been developed, and the effective dose and organ absorbed doses have been calculated for a horizontal narrow photon beam incident from various directions at three representative body sections, 40, 50 and 60 cm from the top of the head. This work covers 24 beam directions at 15° intervals from 0° to 345°, three energy levels (45 keV, 90 keV and 1.25 MeV), and three beam diameters of 1, 2 and 4 cm. The results show that a beam incident from diagonally in front, or from certain other specific directions, causes a peak dose in the case of partial irradiation.

  17. Generation of 238U Covariance Matrices by Using the Integral Data Assimilation Technique of the CONRAD Code

    NASA Astrophysics Data System (ADS)

    Privas, E.; Archier, P.; Bernard, D.; De Saint Jean, C.; Destouche, C.; Leconte, P.; Noguère, G.; Peneliau, Y.; Capote, R.

    2016-02-01

    A new IAEA Coordinated Research Project (CRP) aims to test, validate and improve the IRDF library. Among the isotopes of interest, the modelling of the 238U capture and fission cross sections represents a challenging task. A new description of 238U neutron-induced reactions in the fast energy range is in progress within the framework of an IAEA evaluation consortium. The Nuclear Data group of Cadarache participates in this effort, utilizing the 238U spectral indices measurements and Post-Irradiation Experiments (PIE) carried out in the fast reactors MASURCA (CEA Cadarache) and PHENIX (CEA Marcoule). Such a collection of experimental results provides reliable integral information on the (n,γ) and (n,f) cross sections. This paper presents the Integral Data Assimilation (IDA) technique of the CONRAD code, used to propagate the uncertainties of the integral data to the 238U cross sections of interest for dosimetry applications.
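
    Such an assimilation step can be summarized in the usual generalized-least-squares form (the notation below is generic, not necessarily CONRAD's internal one):

```latex
\mathbf{x}' = \mathbf{x} + \mathbf{M}\mathbf{G}^{T}
  \left(\mathbf{G}\mathbf{M}\mathbf{G}^{T} + \mathbf{V}\right)^{-1}
  \left(\mathbf{y} - \mathbf{t}(\mathbf{x})\right),
\qquad
\mathbf{M}' = \mathbf{M} - \mathbf{M}\mathbf{G}^{T}
  \left(\mathbf{G}\mathbf{M}\mathbf{G}^{T} + \mathbf{V}\right)^{-1}
  \mathbf{G}\mathbf{M}
```

    where x holds the model (cross-section) parameters with prior covariance M, y the integral measurements with experimental covariance V, t(x) the calculated values, and G = ∂t/∂x the sensitivity matrix. The updated covariance M' is how the integral data constrain the 238U covariance matrices of the title.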

  18. Clinical implementation of total skin electron irradiation treatment with a 6 MeV electron beam in high-dose total skin electron mode

    NASA Astrophysics Data System (ADS)

    Lucero, J. F.; Rojas, J. I.

    2016-07-01

    Total skin electron irradiation (TSEI) is a special treatment technique offered by modern radiation oncology facilities for the treatment of mycosis fungoides, a rare skin disease which is a type of cutaneous T-cell lymphoma [1]. During treatment the patient's entire skin is irradiated with a uniform dose. The aim of this work is to present the implementation of total skin electron irradiation treatment using the IAEA TRS-398 code of practice for absolute dosimetry and taking advantage of the use of radiochromic films.

  19. Simulation of MeV electron energy deposition in CdS quantum dots absorbed in silicate glass for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Baharin, R.; Hobson, P. R.; Smith, D. R.

    2010-09-01

    We are currently developing 2D dosimeters with optical readout based on CdS or CdS/CdSe core-shell quantum-dots using commercially available materials. In order to understand the limitations on the measurement of a 2D radiation profile the 3D deposited energy profile of MeV energy electrons in CdS quantum-dot-doped silica glass have been studied by Monte Carlo simulation using the CASINO and PENELOPE codes. Profiles for silica glass and CdS quantum-dot-doped silica glass were then compared.

  20. Clinical implementation of total skin electron irradiation treatment with a 6 MeV electron beam in high-dose total skin electron mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucero, J. F., E-mail: fernando.lucero@hoperadiotherapy.com.gt; Hope International, Guatemala; Rojas, J. I., E-mail: isaac.rojas@siglo21.cr

    Total skin electron irradiation (TSEI) is a special treatment technique offered by modern radiation oncology facilities for the treatment of mycosis fungoides, a rare skin disease which is a type of cutaneous T-cell lymphoma [1]. During treatment the patient's entire skin is irradiated with a uniform dose. The aim of this work is to present the implementation of total skin electron irradiation treatment using the IAEA TRS-398 code of practice for absolute dosimetry and taking advantage of the use of radiochromic films.

  1. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared-memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of fully developed channel flow at a friction Reynolds number of Re(sub tau) = 180 has shown good agreement with existing data.

  2. Hydroxyl radical-PLIF measurements and accuracy investigation in high pressure gaseous hydrogen/gaseous oxygen combustion

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Aravind

    In-flow species concentration measurements in reacting flows at high pressures are needed both to improve the current understanding of the physical processes taking place and to validate predictive tools under development for application to the design and optimization of a range of power plants, from diesel to rocket engines. To date, non-intrusive measurements have been based on calibrations determined from assumptions that were not sufficiently quantified to provide a clear understanding of the range of uncertainty associated with these measurements. The purpose of this work is to quantify the uncertainties associated with OH measurement in an oxygen-hydrogen system produced by a shear coaxial injector typical of those used in rocket engines. Planar OH distributions are obtained, providing the instantaneous and averaged distributions required for both LES and RANS codes currently under development. This study has evaluated the uncertainties associated with OH measurement at 10, 27, 37 and 53 bar. The total rms error for OH-PLIF measurements, from eighteen different parameters, was quantified and found to be 21.9, 22.8, 22.5, and 22.9% at 10, 27, 37 and 53 bar, respectively. These results are used by collaborators at Georgia Institute of Technology (LES), Pennsylvania State University (LES), University of Michigan (RANS) and NASA Marshall (RANS).
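
    Total rms errors of this kind are obtained by combining the individual, assumed-independent error sources in quadrature. A minimal sketch with made-up component values follows (the paper's eighteen parameters are not reproduced here):

```python
import math

def total_rms(errors_percent):
    """Combine independent error sources in quadrature (root-sum-square)."""
    return math.sqrt(sum(e * e for e in errors_percent))

# Illustrative only: four hypothetical error sources, not the paper's data.
print(total_rms([10.0, 12.0, 8.0, 5.0]))
```

    A consequence of quadrature addition is that the total is dominated by the few largest contributors, so reducing a single small error source barely changes the quoted ~22% totals.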

  3. High speed turbulent reacting flows: DNS and LES

    NASA Technical Reports Server (NTRS)

    Givi, Peyman

    1990-01-01

    Work on understanding the mechanisms of mixing and reaction in high speed turbulent reacting flows was continued. Efforts, in particular, were concentrated on taking advantage of modern computational methods to simulate high speed turbulent flows. In doing so, two methodologies were used: large eddy simulations (LES) and direct numerical simulations (DNS). In the work related with LES the objective is to study the behavior of the probability density functions (pdfs) of scalar properties within the subgrid in reacting turbulent flows. The data base obtained by DNS for a detailed study of the pdf characteristics within the subgrid was used. Simulations are performed for flows under various initializations to include the effects of compressibility on mixing and chemical reactions. In the work related with DNS, a two-dimensional temporally developing high speed mixing layer under the influence of a second-order non-equilibrium chemical reaction of the type A + B yields products + heat was considered. Simulations were performed with different magnitudes of the convective Mach numbers and with different chemical kinetic parameters for the purpose of examining the isolated effects of the compressibility and the heat released by the chemical reactions on the structure of the layer. A full compressible code was developed and utilized, so that the coupling between mixing and chemical reactions is captured in a realistic manner.

  4. The influence of misrepresenting the nocturnal boundary layer on idealized daytime convection in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    van Stratum, Bart J. H.; Stevens, Bjorn

    2015-06-01

    The influence of poorly resolved mixing processes in the nocturnal boundary layer (NBL) on the development of the convective boundary layer the following day is studied using large-eddy simulation (LES). Guided by measurement data from meteorological sites in Cabauw (Netherlands) and Hamburg (Germany), typical summertime NBL conditions for Western Europe are characterized and used to design idealized (no moisture or large-scale forcings) numerical experiments of the diel cycle. Using the UCLA-LES code with a traditional Smagorinsky-Lilly subgrid model and a simplified land-surface scheme, a sensitivity study to grid spacing is performed. At horizontal grid spacings ranging from 3.125 m, at which most turbulence in the cases of interest can be resolved, to 100 m, which is clearly insufficient to resolve the NBL, the ability of LES to represent the NBL and the influence of NBL biases on the subsequent daytime development of the convective boundary layer are examined. Although the low-resolution experiments produce substantial biases in the NBL, the influence on daytime convection is shown to be small, with biases in the afternoon boundary-layer depth and temperature of approximately 100 m and 0.5 K, which partially cancel each other in terms of the mixed-layer-top relative humidity.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, B. E.; Olson, B. J.; White, J. E.

    High-fidelity large eddy simulation (LES) of a low-Atwood number (A = 0.05) Rayleigh-Taylor mixing layer is performed using the tenth-order compact difference code Miranda. An initial multimode perturbation spectrum is specified in Fourier space as a function of mesh resolution such that a database of results is obtained in which each successive level of increased grid resolution corresponds approximately to one additional doubling of the mixing layer width, or generation. The database is then analyzed to determine approximate requirements for self-similarity, and a new metric is proposed to quantify how far a given simulation is from the limit of self-similarity. It is determined that mixing layer growth reaches a high degree of self-similarity after approximately 4.5 generations. Statistical convergence errors and boundary effects at late time, however, make it impossible to draw similar conclusions regarding the self-similar growth of more sensitive turbulence parameters. Finally, self-similar turbulence profiles from the LES database are compared with one-dimensional simulations using the k-L-a and BHR-2 Reynolds-averaged Navier-Stokes (RANS) models. The k-L-a model, which is calibrated to reproduce a quadratic turbulence kinetic energy profile for a self-similar mixing layer, is found to be in better agreement with the LES than BHR-2 results.

  6. LES of stratified-wavy flows using novel near-interface treatment

    NASA Astrophysics Data System (ADS)

    Karnik, Aditya; Kahouadji, Lyes; Chergui, Jalel; Juric, Damir; Shin, Seungwon; Matar, Omar K.

    2017-11-01

    The pressure drop in horizontal stratified wavy flows is influenced by interfacial shear stress. The near-interface behavior of the lighter phase is akin to that near a moving wall. We employ a front-tracking code, Blue, to simulate and capture the near-interface behaviour of both phases. Blue uses a modified Smagorinsky LES model incorporating a novel near-interface treatment for the sub-grid viscosity, which is influenced by damping due to the wall-like interface, and enhancement of the turbulent kinetic energy (TKE) due to the interfacial waves. Simulations are carried out for both air-water and oil-water stratified configurations to demonstrate the applicability of the present method. The mean velocities and tangential Reynolds stresses are compared with experiments for both configurations. At the higher Re, the waves penetrate well into the buffer region of the boundary layer above the interface thus altering its dynamics. Previous attempts to capture the secondary structures associated with such flows using RANS or standard LES methodologies have been unsuccessful. The ability of the present method to reproduce these structures is due to the correct estimation of the near-interface TKE governing energy transfer from the normal to tangential directions. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).

  7. TU-F-201-00: Radiochromic Film Dosimetry Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Since the introduction of radiochromic films (RCF) for radiation dosimetry, the scope of RCF dosimetry has expanded steadily to include many medical applications, such as radiation therapy and diagnostic radiology. The AAPM Task Group (TG) 55 published a report on the recommendations for RCF dosimetry in 1998. As the technology is advancing rapidly, and its routine clinical use is expanding, TG 235 has been formed to provide an update to TG-55 on radiochromic film dosimetry. RCF dosimetry applications in clinical radiotherapy have become even more widespread, expanding from primarily brachytherapy and radiosurgery applications, and gravitating towards (but not limited to) external beam therapy (photon, electron and proton), such as quality assurance for IMRT, VMAT, Tomotherapy, SRS/SRT, and SBRT. In addition, RCF applications now extend to measurements of radiation dose in particle beams and patients undergoing medical exams, especially fluoroscopically guided interventional procedures and CT. The densitometers/scanners used for RCF dosimetry have also evolved from the He-Ne laser scanner to CCD-based scanners, including roller-based scanners, light box-based digital cameras, and flatbed color scanners. More recently, multichannel RCF dosimetry introduced a new paradigm for external beam dose QA, owing to its high accuracy and efficiency. This course covers in detail the recent advancements in RCF dosimetry. Learning Objectives: (1) introduce the paradigm shift in multichannel film dosimetry; (2) outline the procedures to achieve accurate dosimetry with an RCF dosimetry system; (3) provide comprehensive guidelines on RCF dosimetry for various clinical applications. One of the speakers has a research agreement with Ashland Inc., the manufacturer of Gafchromic film.

  8. TU-F-201-01: General Aspects of Radiochromic Film Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niroomand-Rad, A.

    Since the introduction of radiochromic films (RCF) for radiation dosimetry, the scope of RCF dosimetry has expanded steadily to include many medical applications, such as radiation therapy and diagnostic radiology. The AAPM Task Group (TG) 55 published a report on the recommendations for RCF dosimetry in 1998. As the technology is advancing rapidly, and its routine clinical use is expanding, TG 235 has been formed to provide an update to TG-55 on radiochromic film dosimetry. RCF dosimetry applications in clinical radiotherapy have become even more widespread, expanding from primarily brachytherapy and radiosurgery applications, and gravitating towards (but not limited to) external beam therapy (photon, electron and proton), such as quality assurance for IMRT, VMAT, Tomotherapy, SRS/SRT, and SBRT. In addition, RCF applications now extend to measurements of radiation dose in particle beams and patients undergoing medical exams, especially fluoroscopically guided interventional procedures and CT. The densitometers/scanners used for RCF dosimetry have also evolved from the He-Ne laser scanner to CCD-based scanners, including roller-based scanners, light box-based digital cameras, and flatbed color scanners. More recently, multichannel RCF dosimetry introduced a new paradigm for external beam dose QA, owing to its high accuracy and efficiency. This course covers in detail the recent advancements in RCF dosimetry. Learning Objectives: (1) introduce the paradigm shift in multichannel film dosimetry; (2) outline the procedures to achieve accurate dosimetry with an RCF dosimetry system; (3) provide comprehensive guidelines on RCF dosimetry for various clinical applications. One of the speakers has a research agreement with Ashland Inc., the manufacturer of Gafchromic film.

  9. Thermoluminescent dosimetry in veterinary diagnostic radiology.

    PubMed

    Hernández-Ruiz, L; Jimenez-Flores, Y; Rivera-Montalvo, T; Arias-Cisneros, L; Méndez-Aguilar, R E; Uribe-Izquierdo, P

    2012-12-01

    This paper presents the results of environmental and personnel dosimetry performed in the radiology area of a veterinary hospital. Dosimetry was carried out using thermoluminescent (TL) materials. The environmental dosimetry results show that the areas closest to the X-ray equipment are safe. Personnel dosimetry revealed daily workday doses for some individuals close to the limit established by the ICRP. The TL radiation measurements suggest that TLDs are good candidates for radiation dosimetry in veterinary radiology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Evaluation of Simulated RADARSAT-2 Polarimetry Products

    DTIC Science & Technology

    2007-09-01

    compared with the use of a single-channel radar for detecting ships, as well as the possible advantages of polarimetric target decomposition... calculation of the ROC can be applied to any designed probability of false alarm, such as PFA = 10^-9, provided there are enough ocean samples available. The... designation, trade name, military project code name, geographic location may also be included. If possible, keywords should be selected from a published

  11. Error Processing Techniques for the Modified Read Facsimile Code.

    DTIC Science & Technology

    1981-09-01

    presupposes the creation of a common medium for all the information, a true "data bank" distributed over national processing facilities... the existence of 20 regions. Computers have in the past been installed in at least all the most important ones; Bull machines are thus found... a certain number of them will necessarily have to be accessible, or even updated in real time: among the latter, the commercial file

  12. Development of the Glenn-Heat-Transfer (Glenn-HT) Computer Code to Enable Time-Filtered Navier Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include laminar/turbulent transition, turbulent mixing due to the mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy, but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0, are shown. Flow features under those conditions are also described.

  13. Coupling the Weather Research and Forecasting (WRF) model and Large Eddy Simulations with Actuator Disk Model: predictions of wind farm power production

    NASA Astrophysics Data System (ADS)

    Garcia Cartagena, Edgardo Javier; Santoni, Christian; Ciri, Umberto; Iungo, Giacomo Valerio; Leonardi, Stefano

    2015-11-01

    A large-scale wind farm operating under realistic atmospheric conditions is studied by coupling meso-scale and micro-scale models. For this purpose, the Weather Research and Forecasting (WRF) model is coupled with an in-house LES solver for wind farms. The code is based on a finite difference scheme with a Runge-Kutta time integration, a fractional-step method and an actuator disk model. The WRF model has been configured using seven one-way nested domains, where each child domain has a mesh size one third of that of its parent domain. A horizontal resolution of 70 m is used in the innermost domain. A section from the smallest and finest nested domain, 7.5 diameters upwind of the wind farm, is used as the inlet boundary condition for the LES code. The wind farm consists of six turbines aligned with the mean wind direction, with a streamwise spacing of 10 rotor diameters (D) and a spanwise spacing of 2.75D. Three simulations were performed by varying the velocity fluctuations at the inlet: random perturbations, a precursor simulation, and a recycling perturbation method. Results are compared with a simulation of the same wind farm with an ideal uniform wind speed to assess the importance of the time-varying incoming wind velocity. Numerical simulations were performed at TACC (Grant CTS070066). This work was supported by NSF (Grant IIA-1243482 WINDINSPIRE).
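The actuator disk model named in this record represents each turbine as a distributed thrust force rather than resolving the blades. A minimal sketch of that idea follows; the density, rotor diameter and thrust coefficient are illustrative values, not parameters of the simulated farm:

```python
import numpy as np

def actuator_disk_force(u_inf, rho=1.225, diameter=100.0, ct=0.75):
    """Total thrust (N) of a uniform actuator disk:
    T = 0.5 * rho * Ct * A * U_inf^2.
    rho, diameter and ct here are illustrative assumptions."""
    area = np.pi * (diameter / 2.0) ** 2
    return 0.5 * rho * ct * area * u_inf ** 2

def force_per_cell(u_inf, n_cells, **kwargs):
    """In an LES, the thrust is spread as a body force over the
    n_cells grid cells covered by the disk."""
    return actuator_disk_force(u_inf, **kwargs) / n_cells

thrust = actuator_disk_force(8.0)   # roughly 2.3e5 N for these parameters
```

In practice the force is weighted by the local disk-averaged velocity and smeared over neighboring cells; the uniform split above is the simplest variant.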

  14. Development of the Glenn Heat-Transfer (Glenn-HT) Computer Code to Enable Time-Filtered Navier-Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include laminar/turbulent transition, turbulent mixing due to the mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy, but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0, are shown. Flow features under those conditions are also described.

  15. Large-Eddy Simulation of Conductive Flows at Low Magnetic Reynolds Number

    NASA Technical Reports Server (NTRS)

    Knaepen, B.; Moin, P.

    2003-01-01

    In this paper we study the LES method with a dynamic procedure in the context of conductive flows subject to an applied external magnetic field at low magnetic Reynolds number R(sub m). These kinds of flows are encountered in many industrial applications. For example, in the steel industry, applied magnetic fields can be used to damp turbulence in the casting process. In nuclear fusion devices (tokamaks), liquid-lithium flows are used as coolant blankets and interact with the surrounding magnetic field that drives and confines the fusion plasma. Also, in experimental facilities investigating the dynamo effect, the flow consists of liquid sodium, for which the Prandtl number and, as a consequence, the magnetic Reynolds number are low. Our attention is focused here on the case of homogeneous (initially isotropic) decaying turbulence. The numerical simulations performed mimic the thought experiment described by Moffatt, in which an initially homogeneous isotropic conductive flow is suddenly subjected to an applied magnetic field and freely decays without any forcing; this flow was first studied numerically by Schumann. It is well known that in that case extra damping of turbulence occurs due to the Joule effect, and that the flow tends to become progressively independent of the coordinate along the direction of the magnetic field. Our comparison of filtered direct numerical simulation (DNS) predictions and LES predictions shows that the dynamic Smagorinsky model successfully captures the flow with LES, and that it automatically incorporates the effect of the magnetic field on the turbulence. Our paper is organized as follows. In the next section we summarize the LES approach for MHD turbulence at low R(sub m) and recall the definition of the dynamic Smagorinsky model. In Sec. 3 we describe the parameters of the numerical experiments performed and the code used. Section 4 is devoted to the comparison of filtered DNS results and LES results. Conclusions are presented in Sec. 5.

  16. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
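The implicit-explicit splitting at the heart of such time-marching schemes treats the stiff terms (e.g. wall-normal diffusion on a strongly stretched grid) implicitly and the rest explicitly. Below is a first-order toy version of that splitting on a periodic 1D advection-diffusion problem; it is not the paper's new low-storage IMEX-RK schemes, just the generic structure they build on:

```python
import numpy as np

def imex_euler_step(u, dt, A, f_explicit):
    """One first-order implicit-explicit step for du/dt = A @ u + f(u):
    the stiff linear operator A is treated with backward Euler,
    everything else with forward Euler."""
    n = len(u)
    rhs = u + dt * f_explicit(u)
    return np.linalg.solve(np.eye(n) - dt * A, rhs)

# Toy periodic setup: diffusion implicit, advection explicit.
n, nu, c = 50, 0.01, 1.0
dx = 1.0 / n
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A[0, -1] = A[-1, 0] = 1.0              # periodic boundaries
A *= nu / dx**2

def advect(u):                         # first-order upwind for c > 0
    return -c * (u - np.roll(u, 1)) / dx

x = np.arange(n) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)    # Gaussian pulse
for _ in range(100):
    u = imex_euler_step(u, 1e-3, A, advect)
```

Higher-order IMEX-RK schemes apply the same split within each stage; the low-storage variants in the paper additionally avoid keeping all stage values in memory.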

  17. Using multi-country household surveys to understand who provides reproductive and maternal health services in low- and middle-income countries: a critical appraisal of the Demographic and Health Surveys

    PubMed Central

    Footman, K; Benova, L; Goodman, C; Macleod, D; Lynch, C A; Penn-Kekana, L; Campbell, O M R

    2015-01-01

    Objective The Demographic and Health Surveys (DHS) are a vital data resource for cross-country comparative analyses. This study is part of a set of analyses assessing the types of providers being used for reproductive and maternal health care across 57 countries. Here, we examine some of the challenges encountered using DHS data for this purpose, present the provider classification we used, and provide recommendations to enable more detailed and accurate cross-country comparisons of healthcare provision. Methods We used the most recent DHS surveys between 2000 and 2012; 57 countries had data on family planning and delivery care providers and 47 countries had data on antenatal care. Every possible response option across the 57 countries was listed and categorised. We then developed a classification to group provider response options according to two key dimensions: clinical nature and profit motive. Results We classified the different types of maternal and reproductive healthcare providers, and the individuals providing care. Documented challenges encountered during this process were limitations inherent in household survey data based on respondents’ self-report; conflation of response options in the questionnaire or at the data processing stage; category errors of the place vs. professional for delivery; inability to determine whether care received at home is from the public or private sector; a large number of negligible response options; inconsistencies in coding and analysis of data sets; and the use of inconsistent headings. Conclusions To improve clarity, we recommend addressing issues such as conflation of response options, data on public vs. private provider, inconsistent coding and obtaining metadata. More systematic and standardised collection of data would aid international comparisons of progress towards improved financial protection, and allow us to better characterise the incentives and commercial nature of different providers. 
    PMID:25641212

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu-Tsao, S.

    Since the introduction of radiochromic film (RCF) for radiation dosimetry, the scope of RCF dosimetry has expanded steadily to include many medical applications, such as radiation therapy and diagnostic radiology. The AAPM Task Group (TG) 55 published a report with recommendations for RCF dosimetry in 1998. As the technology is advancing rapidly and its routine clinical use is expanding, TG 235 has been formed to provide an update to TG-55 on radiochromic film dosimetry. RCF dosimetry applications in clinical radiotherapy have become even more widespread, expanding from primarily brachytherapy and radiosurgery applications towards (but not limited to) external beam therapy (photon, electron and proton), such as quality assurance for IMRT, VMAT, Tomotherapy, SRS/SRT and SBRT. In addition, RCF applications now extend to measurements of radiation dose in particle beams and in patients undergoing medical exams, especially fluoroscopically guided interventional procedures and CT. The densitometers/scanners used for RCF dosimetry have also evolved from the He-Ne laser scanner to CCD-based scanners, including the roller-based scanner, the light-box-based digital camera and the flatbed color scanner. More recently, multichannel RCF dosimetry introduced a new paradigm for external beam dose QA owing to its high accuracy and efficiency. This course covers in detail the recent advancements in RCF dosimetry. Learning Objectives: (1) introduce the paradigm shift of multichannel film dosimetry; (2) outline the procedures to achieve accurate dosimetry with an RCF dosimetry system; (3) provide comprehensive guidelines on RCF dosimetry for various clinical applications. One of the speakers has a research agreement with Ashland Inc., the manufacturer of Gafchromic film.

  19. Characterization and Simulation of a New Design Parallel-Plate Ionization Chamber for CT Dosimetry at Calibration Laboratories

    NASA Astrophysics Data System (ADS)

    Perini, Ana P.; Neves, Lucio P.; Maia, Ana F.; Caldas, Linda V. E.

    2013-12-01

    In this work, a new extended-length parallel-plate ionization chamber was tested in the standard radiation qualities for computed tomography, established according to the half-value layers defined in the IEC 61267 standard, at the Calibration Laboratory of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). The experimental characterization followed the IEC 61674 standard recommendations. The experimental results obtained with the ionization chamber studied in this work were compared to those obtained with a commercial pencil ionization chamber, showing good agreement. With the PENELOPE Monte Carlo code, simulations were undertaken to evaluate the influence of the cables, insulator, PMMA body, collecting electrode, guard ring and screws, as well as of different materials and geometrical arrangements, on the energy deposited in the ionization chamber's sensitive volume. The maximum influence observed was 13.3%, for the collecting electrode; regarding the use of different materials and designs, the substitutions showed that the original project presented the most suitable configuration. The experimental and simulated results obtained in this work show that this ionization chamber has appropriate characteristics for use at calibration laboratories, for dosimetry in standard computed tomography and diagnostic radiology quality beams.

  20. Neutron spectrometry and dosimetry in 100 and 300 MeV quasi-mono-energetic neutron field at RCNP, Osaka University, Japan

    NASA Astrophysics Data System (ADS)

    Mares, Vladimir; Trinkl, Sebastian; Iwamoto, Yosuke; Masuda, Akihiko; Matsumoto, Tetsuro; Hagiwara, Masayuki; Satoh, Daiki; Yashima, Hiroshi; Shima, Tatsushi; Nakamura, Takashi

    2017-09-01

    This paper describes the results of neutron spectrometry and dosimetry measurements using an extended-range Bonner Sphere Spectrometer (ERBSS) with a 3He proportional counter, performed in quasi-mono-energetic neutron fields at the ring cyclotron facility of the Research Center for Nuclear Physics (RCNP), Osaka University, Japan. Using 100 MeV and 296 MeV proton beams, neutron fields with nominal peak energies of 96 MeV and 293 MeV were generated via 7Li(p,n)7Be reactions. Neutrons produced at 0° and 25° emission angles were extracted into the 100 m long time-of-flight (TOF) tunnel, and the energy spectra were measured at a distance of 35 m from the target. To deduce the corresponding neutron spectra from thermal energies up to the nominal maximum energy, the ERBSS data were unfolded using the MSANDB unfolding code. At high energies, the neutron spectra were also measured by means of the TOF method using NE213 organic liquid scintillators. The results are discussed in terms of ambient dose equivalent, H*(10), and compared with the readings of other instruments operated during the experiment.
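Unfolding, as used in this record, recovers the spectrum phi from sphere readings M_i = sum_j R_ij * phi_j given a response matrix R. A toy multiplicative iterative scheme in the spirit of such unfolding codes is sketched below; the response matrix and spectrum are synthetic, and MSANDB itself uses a more elaborate algorithm:

```python
import numpy as np

def unfold(R, M, n_iter=2000):
    """Toy multiplicative iterative unfolding of M = R @ phi.
    R: response matrix (n_spheres x n_energy_bins); M: sphere readings.
    Each bin is rescaled by the response-weighted ratio of measured to
    predicted counts, keeping phi non-negative by construction."""
    phi = np.ones(R.shape[1])      # flat starting spectrum
    for _ in range(n_iter):
        C = R @ phi                # predicted sphere readings
        phi *= (R * (M / C)[:, None]).sum(axis=0) / R.sum(axis=0)
    return phi

# Synthetic 3-sphere, 4-energy-bin example.
R = np.array([[0.90, 0.50, 0.20, 0.05],
              [0.30, 0.80, 0.60, 0.20],
              [0.05, 0.20, 0.70, 0.90]])
phi_true = np.array([1.0, 2.0, 0.5, 1.5])
M = R @ phi_true
phi_est = unfold(R, M)             # reproduces M; phi itself is not unique
```

Because there are more energy bins than spheres, the problem is underdetermined: the iteration converges to a spectrum consistent with the readings, which is why real unfolding codes rely on a physically motivated starting spectrum.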

  1. Diagnostic x-ray dosimetry using Monte Carlo simulation.

    PubMed

    Ioppolo, J L; Price, R I; Tuchyna, T; Buckley, C E

    2002-05-21

    An Electron Gamma Shower version 4 (EGS4) based user code was developed to simulate the absorbed dose in humans during routine diagnostic radiological procedures. Measurements of absorbed dose using thermoluminescent dosimeters (TLDs) were compared directly with EGS4 simulations of absorbed dose in homogeneous, heterogeneous and anthropomorphic phantoms. Realistic voxel-based models characterizing the geometry of the phantoms were used as input to the EGS4 code. The voxel geometry of the anthropomorphic Rando phantom was derived from a CT scan of Rando. The 100 kVp diagnostic-energy x-ray spectra of the apparatus used to irradiate the phantoms were measured and provided as input to the EGS4 code. The TLDs were placed at evenly spaced points symmetrically about the central beam axis, which was perpendicular to the cathode-anode x-ray axis, at a number of depths. The TLD measurements in the homogeneous and heterogeneous phantoms were on average within 7% of the values calculated by EGS4. Estimates of effective dose with errors of less than 10% required fewer photon histories (1 × 10^7) than the calculation of dose profiles (1 × 10^9). The EGS4 code satisfactorily predicted absorbed dose and thereby provides an instrument for reducing the patient and staff effective dose imparted during radiological investigations.
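Monte Carlo dose codes such as EGS4 work by sampling individual photon histories. The toy sketch below illustrates only the core sampling idea, exponential free paths scored into a depth-dose histogram; the attenuation coefficient is a rough assumption, and a real code also transports secondary electrons and scattered photons:

```python
import random

def depth_dose(n_photons=100_000, mu=0.17, depth_cm=20.0, bins=20, seed=1):
    """Toy Monte Carlo: mono-energetic photons enter a water slab along z
    and deposit all their energy at the first interaction, sampled from
    an exponential free-path distribution (mu in 1/cm, a rough value for
    water at diagnostic energies). This only shows the sampling idea."""
    rng = random.Random(seed)
    dz = depth_cm / bins
    dose = [0.0] * bins
    for _ in range(n_photons):
        z = rng.expovariate(mu)        # distance to first interaction
        if z < depth_cm:
            dose[int(z / dz)] += 1.0   # score one unit of energy
    return dose

profile = depth_dose()                 # falls off roughly exponentially with depth
```

The abstract's observation that effective dose converges with ~10^7 histories while dose profiles need ~10^9 reflects exactly this scoring: a single integral quantity averages over far more histories per bin than a finely binned profile.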

  2. Diagnostic x-ray dosimetry using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Ioppolo, J. L.; Price, R. I.; Tuchyna, T.; Buckley, C. E.

    2002-05-01

    An Electron Gamma Shower version 4 (EGS4) based user code was developed to simulate the absorbed dose in humans during routine diagnostic radiological procedures. Measurements of absorbed dose using thermoluminescent dosimeters (TLDs) were compared directly with EGS4 simulations of absorbed dose in homogeneous, heterogeneous and anthropomorphic phantoms. Realistic voxel-based models characterizing the geometry of the phantoms were used as input to the EGS4 code. The voxel geometry of the anthropomorphic Rando phantom was derived from a CT scan of Rando. The 100 kVp diagnostic-energy x-ray spectra of the apparatus used to irradiate the phantoms were measured and provided as input to the EGS4 code. The TLDs were placed at evenly spaced points symmetrically about the central beam axis, which was perpendicular to the cathode-anode x-ray axis, at a number of depths. The TLD measurements in the homogeneous and heterogeneous phantoms were on average within 7% of the values calculated by EGS4. Estimates of effective dose with errors of less than 10% required fewer photon histories (1 × 10^7) than the calculation of dose profiles (1 × 10^9). The EGS4 code satisfactorily predicted absorbed dose and thereby provides an instrument for reducing the patient and staff effective dose imparted during radiological investigations.

  3. Spatial Disorientation in Military Vehicles: Causes, Consequences and Cures (Desorientation spaiale dans les vehicules militaires: causes, consequences et remedes)

    DTIC Science & Technology

    2003-02-01

    service warfighters (Training devices and protocols, Onboard equipment, Cognitive and sensorimotor aids, Visual and auditory symbology, Peripheral visual... vestibular stimulation causing a decrease in cerebral blood pressure with the consequent reduction in G-tolerance and increased likelihood of ALOC or GLOC... tactile stimulators (e.g. one providing a sensation of movement) or of displays with a more complex coding (e.g. by an increase in the number of tactors, or

  4. Non-Cooperative Air Target Identification Using Radar (l’Identification radar des cibles aeriennes non cooperatives)

    DTIC Science & Technology

    1998-11-01

    are already operational in the radar domain, e.g. in airborne radars. NATO fighter aircraft are equipped with transponder systems answering on... Formatting and calibration of the mean RCS data for a frequency range (bandwidth of the code used) and a sector. This module extracts the... cooperatives) Papers presented at the Symposium of the RTO Systems Concepts and Integration Panel (SCI) held in Mannheim, Germany, 22-24 April 1998. 1

  5. TH-CD-BRA-07: MRI-Linac Dosimetry: Parameters That Change in a Magnetic Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Brien, D. J.; Sawakuchi, G. O.

    Purpose: In MRI-linac integrated systems, the presence of the magnetic (B-)field has a large impact on the dose distribution and on the dose response of detectors; yet established protocols and previous experience may lead to assumptions about the commissioning process that are no longer valid. This study quantifies parameters that change when performing dosimetry with an MRI-linac, including beam quality specifiers and the effective point of measurement (EPOM) of ionization chambers. Methods: We used the Geant4 Monte Carlo code for this work, with physics parameters that pass the Fano cavity test to within 0.1% for the simulated conditions with and without a 1.5 T B-field. A point source model with the energy distribution of an MRI-linac beam was used, with and without the B-field, to calculate the beam quality specifiers %dd(10)x and TPR20,10, the variation of chamber response with orientation, and how the B-field affects the EPOM of ionization chambers, by comparing depth-dose curves calculated in water to those generated by a model of the PTW30013 Farmer chamber. Results: The %dd(10)x changes by over 2% in the presence of the B-field, while the TPR20,10 is unaffected. Ionization chamber dose response is known to depend on the orientation with respect to the B-field, but two alternative perpendicular orientations (anti-parallel to each other) also differ in dose response by over 1%. The B-field shifts the EPOM downstream (closer to the chamber center), but it is also shifted laterally by 0.27 times the chamber's cavity radius. Conclusion: The EPOM is affected by the B-field and even shifts laterally. The relationship between %dd(10)x and the Spencer-Attix stopping powers is also changed. Care must be taken when using chambers perpendicular to the field, as the dose response changes depending on which perpendicular orientation is used. All of these effects must be considered when performing dosimetry in B-fields and should be accounted for in future dosimetry protocols. This project was partially funded by Elekta Ltd.
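For reference, the two beam quality specifiers compared in this record are simple ratios of depth readings; a worked example with invented reading values:

```python
# Beam quality specifiers from depth readings (values below are invented
# for illustration, not taken from the study).
M_dmax, M_10 = 1.000, 0.731     # readings at d_max and at 10 cm depth (SSD setup)
pdd10 = 100.0 * M_10 / M_dmax   # %dd(10): percentage depth dose at 10 cm

M_20 = 0.462                    # reading at 20 cm depth (fixed source-detector distance)
tpr_20_10 = M_20 / M_10         # TPR20,10: ratio of readings at 20 cm and 10 cm
```

The study's point is that a B-field distorts the depth-dose curve enough to change %dd(10)x by over 2%, while the ratio-of-depths quantity TPR20,10 is insensitive to it.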

  6. Organ dose calculations by Monte Carlo modeling of the updated VCH adult male phantom against idealized external proton exposure

    NASA Astrophysics Data System (ADS)

    Zhang, Guozhi; Liu, Qian; Zeng, Shaoqun; Luo, Qingming

    2008-07-01

    The voxel-based visible Chinese human (VCH) adult male phantom has offered a high-quality test bed for realistic Monte Carlo modeling in radiological dosimetry simulations. The phantom has been updated in a recent effort by adding newly segmented organs, revising walled and smaller structures, and recalibrating skeletal marrow distributions. The organ absorbed dose for external proton exposure was calculated at a voxel resolution of 2 × 2 × 2 mm^3 using the MCNPX code for incident energies from 20 MeV to 10 GeV and for six idealized irradiation geometries: anterior-posterior (AP), posterior-anterior (PA), left-lateral (LLAT), right-lateral (RLAT), rotational (ROT) and isotropic (ISO). The effective dose for the VCH phantom was derived in compliance with the evaluation scheme for the reference male proposed in the 2007 recommendations of the International Commission on Radiological Protection (ICRP). Changes arising from the revised radiation and tissue weighting factors account for approximately 90% and 10%, respectively, of the effective dose discrepancies in proton dosimetry. Results are tabulated as fluence-to-dose conversion coefficients for practical use and are compared with data from other models available in the literature. Anatomical variations between computational phantoms lead to dose discrepancies ranging from a negligible level to 100% or more at proton energies below 200 MeV, corresponding to the spatial locations of individual organs within the body. Doses agree better at higher energies, with deviations mostly within 20%, for which organ volume and mass differences are primarily responsible. The impact of body size on dose distributions was assessed by dosimetry of a scaled-up VCH phantom resized to the height and total mass of the ICRP reference man; the organ dose decreases with the directionally uniform enlargement of voxels. Potential pathways to improve the VCH phantom are also briefly addressed. This work pertains to VCH-based systematic multi-particle dose investigations and will contribute to comparative dosimetry studies of ICRP standardized voxel phantoms in the near future.
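Fluence-to-dose conversion coefficients of the kind tabulated in such studies turn organ doses per unit fluence into an effective dose via the ICRP tissue-weighted sum E = sum_T w_T * H_T. The sketch below uses a few genuine ICRP 103 tissue weights, but the conversion coefficients and the fluence are invented for illustration:

```python
# Effective dose as the ICRP tissue-weighted sum E = sum_T w_T * H_T,
# with each organ dose obtained from a fluence-to-dose conversion
# coefficient: H_T = c_T * Phi.
w_T = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "skin": 0.01}  # ICRP 103 subset
c_T = {"lung": 1.1e-9, "stomach": 0.9e-9,
       "liver": 1.0e-9, "skin": 0.7e-9}   # Sv per unit fluence (cm^-2); made up
phi = 1.0e6                               # proton fluence in cm^-2; made up

E_partial = sum(w_T[t] * c_T[t] * phi for t in w_T)  # Sv, partial sum over 4 tissues
```

A full evaluation sums over all ICRP 103 tissues (the weights sum to 1) and, per the 2007 recommendations, averages male and female phantom doses.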

  7. 10 CFR 835.1304 - Nuclear accident dosimetry.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Nuclear accident dosimetry. 835.1304 Section 835.1304... Nuclear accident dosimetry. (a) Installations possessing sufficient quantities of fissile material to... nuclear accident is possible, shall provide nuclear accident dosimetry for those individuals. (b) Nuclear...

  8. 10 CFR 835.1304 - Nuclear accident dosimetry.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Nuclear accident dosimetry. 835.1304 Section 835.1304... Nuclear accident dosimetry. (a) Installations possessing sufficient quantities of fissile material to... nuclear accident is possible, shall provide nuclear accident dosimetry for those individuals. (b) Nuclear...

  9. 10 CFR 835.1304 - Nuclear accident dosimetry.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Nuclear accident dosimetry. 835.1304 Section 835.1304... Nuclear accident dosimetry. (a) Installations possessing sufficient quantities of fissile material to... nuclear accident is possible, shall provide nuclear accident dosimetry for those individuals. (b) Nuclear...

  10. 10 CFR 835.1304 - Nuclear accident dosimetry.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Nuclear accident dosimetry. 835.1304 Section 835.1304... Nuclear accident dosimetry. (a) Installations possessing sufficient quantities of fissile material to... nuclear accident is possible, shall provide nuclear accident dosimetry for those individuals. (b) Nuclear...

  11. 10 CFR 835.1304 - Nuclear accident dosimetry.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Nuclear accident dosimetry. 835.1304 Section 835.1304... Nuclear accident dosimetry. (a) Installations possessing sufficient quantities of fissile material to... nuclear accident is possible, shall provide nuclear accident dosimetry for those individuals. (b) Nuclear...

  12. FERRET-SAND II physics-dosimetry analysis for N Reactor Pressure Tubes 2954, 3053 and 1165 using a WIMS calculated input spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McElroy, W.N.; Kellogg, L.S.; Matsumoto, W.Y.

    1988-05-01

    This report is in response to a request from Westinghouse Hanford Company (WHC) that the PNL National Dosimetry Center (NDC) perform physics-dosimetry analyses (E > MeV) for N Reactor Pressure Tubes 2954 and 3053. As a result of these analyses, and recommendations for additional studies, two physics-dosimetry re-evaluations for Pressure Tube 1165 were also accomplished. The primary objective of Pacific Northwest Laboratories' (PNL) National Dosimetry Center (NDC) physics-dosimetry work for N Reactor was to provide FERRET-SAND II physics-dosimetry results to assist in the assessment of neutron radiation-induced changes in the physical and mechanical properties of N Reactor pressure tubes. 15 refs., 6 figs., 5 tabs.

  13. Optimization of the Temporal Pattern of Applied Radiation Dose: Implication for the Treatment of Prostate Cancer

    DTIC Science & Technology

    2009-03-01

    environment II.A: Characterization of dosimetry in IMRT radiobiological experiment phantom using TLDs and film. (7-10 mos.) Objectives: 1... dosimetry with TLDs and film. (8-10 mos.) 4. Analysis of measured dosimetry with TLDs and film compared to predicted dosimetry from treatment...cells were). Dosimetry in the phantom was assessed with film and monitor units were calculated accordingly to deliver the desired dose. Once in

  14. Characterising an aluminium oxide dosimetry system.

    PubMed

    Conheady, Clement F; Gagliardi, Frank M; Ackerly, Trevor

    2015-09-01

    In vivo dosimetry is recommended as a defence-in-depth strategy in radiotherapy treatments and is currently employed by clinics around the world. The characteristics of a new optically stimulated luminescence dosimetry system were investigated for the purpose of replacing an aging thermoluminescence dosimetry system for in vivo dosimetry. The stability of the system was not sufficient to satisfy commissioning requirements and therefore it has not been released into clinical service at this time.

  15. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    NASA Astrophysics Data System (ADS)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for the IBM SP3 parallel machine. The scheme was validated first for a flat plate, where it was shown that the blending function must be monotonic to prevent the RANS region from appearing inside the LES region. In the 25 deg ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and yielded further improvement in agreement with experiment in the recovery region. In the 20 deg ramp case, with a relatively finer grid, the hybrid scheme with the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
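The monotonic RANS-to-LES blending described above can be illustrated with a hypothetical length-scale-based function. The tanh form, the constants, and the viscosity values below are assumptions for demonstration only, not the authors' actual formulation:

```python
import math

def blend(d, l):
    """Hypothetical blending factor in [0, 1] from the ratio of wall
    distance d to a turbulence dissipation length scale l:
    0 -> pure RANS near the wall, 1 -> pure LES away from it.
    tanh is monotonic, matching the requirement noted in the abstract."""
    return 0.5 * (1.0 + math.tanh(4.0 * (d / l - 1.0)))

def hybrid_viscosity(nu_rans, nu_les, d, l):
    """Blend the RANS eddy viscosity with the LES subgrid viscosity."""
    g = blend(d, l)
    return (1.0 - g) * nu_rans + g * nu_les
```

Because the function depends only on d/l, not on the mesh spacing, it behaves like the grid-independent variant the abstract found advantageous under grid refinement.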

  16. Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.

    2017-02-01

    The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors, in order to improve image quality and at the same time to reduce the dose to the patient through preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, the material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computed tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF), which is modified during the calculations until the calculated MC CTDIw/CTDI100,a matches the ratio CTDIw/CTDI100,a determined by measurements or found in publications for that specific scanner. Once the scanner-specific value of the SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a on 11 different CT scanners, and for CTDI100,c, CTDI100,p and their ratio on 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms, respectively. Comparison with results calculated using proprietary data for the PHILIPS Brilliance 64 scanner showed agreement on average within 2.5% (max. 5.8%), and with data measured for that scanner within 2.1% (max. 3.7%). Ratios of CTDI100,c/CTDI100,p from this study and corresponding data published by CTDosimetry (www.impactscan.org) agree on average within about 11% (max. 28.6%). Lateral dose profiles calculated with the proposed bowtie filter and with proprietary data agreed within 2% (max. 5.9%), and both calculated datasets agreed within 5.4% (max. 11.2%) with measured results. Application of the proposed bowtie filter and of the exactly modelled filter in human phantom Monte Carlo calculations shows agreement on average within less than 5% (max. 7.9%) for organ and tissue absorbed doses.
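A bowtie thickness profile built from a sigmoid of the Boltzmann type, as described above, can be sketched as follows. The functional form, parameter values, and the way the SFF enters are illustrative assumptions; in the paper the SFF is tuned until Monte Carlo CTDIw/CTDI100,a ratios match measured or published ones:

```python
import math

def bowtie_thickness(theta, sff, t_max=3.0, theta0=10.0, width=3.0):
    """Hypothetical PMMA thickness (cm) at fan angle theta (degrees from
    the beam centre): a Boltzmann sigmoid that is thin on-axis and grows
    toward the periphery, scaled by the scanner filter factor (SFF)."""
    s = 1.0 / (1.0 + math.exp(-(abs(theta) - theta0) / width))
    return sff * t_max * s
```

The sigmoid reproduces the essential bowtie behaviour (little filtration at the centre, strong filtration at the fan edges) with only a few free parameters, which is what makes a one-parameter SFF fit feasible.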

  17. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    NASA Astrophysics Data System (ADS)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. The EDKw are linearly scaled by radiological distance, taking tissue density heterogeneities into account. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated by comparing Monte Carlo simulations with the CC algorithms at 80 absorbed dose (AD) points per phantom. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. SCC results were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.

  18. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. The EDKw are linearly scaled by radiological distance, taking tissue density heterogeneities into account. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated by comparing Monte Carlo simulations with the CC algorithms at 80 absorbed dose (AD) points per phantom. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. SCC results were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.
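The radiological-distance scaling used by both CC variants above amounts to summing density-weighted path lengths so that the water kernel is looked up at a water-equivalent distance. A minimal sketch; the densities, step size, and helper name are illustrative assumptions:

```python
def radiological_distance(densities, step_cm):
    """Water-equivalent path length (cm): sum of rho_i * dl along a ray,
    with densities given relative to water."""
    return sum(rho * step_cm for rho in densities)

# Illustrative ray crossing 2 cm of soft tissue (rho ~ 1.0), 2 cm of lung
# (rho ~ 0.26) and 1 cm of bone (rho ~ 1.85), sampled in 1 cm steps:
r_eff = radiological_distance([1.0, 1.0, 0.26, 0.26, 1.85], 1.0)
```

The EDKw is then evaluated at r_eff rather than at the geometric distance, which is how the method accounts for lung and bone heterogeneities without re-simulating the kernel.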

  19. Development of Monte Carlo simulations to provide scanner-specific organ dose coefficients for contemporary CT

    NASA Astrophysics Data System (ADS)

    Jansen, Jan T. M.; Shrimpton, Paul C.

    2016-07-01

    The ImPACT (imaging performance assessment of CT scanners) CT patient dosimetry calculator is still used world-wide to estimate organ and effective doses (E) for computed tomography (CT) examinations, although the tool is based on Monte Carlo calculations reflecting practice in the early 1990s. Subsequent developments in CT scanners, definitions of E, anthropomorphic phantoms, computers and radiation transport codes have all fuelled an urgent need for updated organ dose conversion factors for contemporary CT. A new system for such simulations has been developed and satisfactorily tested. Benchmark comparisons of normalised organ doses presently derived for three old scanners (General Electric 9800, Philips Tomoscan LX and Siemens Somatom DRH) are within 5% of published values. Moreover, calculated normalised values of CT Dose Index for these scanners are in reasonable agreement (within measurement and computational uncertainties of  ±6% and  ±1%, respectively) with reported standard measurements. Organ dose coefficients calculated for a contemporary CT scanner (Siemens Somatom Sensation 16) demonstrate potential deviations of up to around 30% from the surrogate values presently assumed (through a scanner matching process) when using the ImPACT CT Dosimetry tool for newer scanners. Also, illustrative estimates of E for some typical examinations and a range of anthropomorphic phantoms demonstrate the significant differences (by some tens of percent) that can arise when changing from the previously adopted stylised mathematical phantom to the voxel phantoms presently recommended by the International Commission on Radiological Protection (ICRP), and when following the 2007 ICRP recommendations (updated from 1990) concerning tissue weighting factors. Further simulations with the validated dosimetry system will provide updated series of dose coefficients for a wide range of contemporary scanners.

  20. Application of a color scanner for 60Co high dose rate brachytherapy dosimetry with EBT radiochromic film

    PubMed Central

    Ghorbani, Mahdi; Toossi, Mohammad Taghi Bahreyni; Mowlavi, Ali Asghar; Roodi, Shahram Bayani; Meigooni, Ali Soleimani

    2012-01-01

    Background. The aim of this study is to evaluate the performance of a color scanner as a radiochromic film reader in two-dimensional dosimetry around a high dose rate brachytherapy source. Materials and methods. A Microtek ScanMaker 1000XL film scanner was utilized for the measurement of dose distribution around a high dose rate GZP6 60Co brachytherapy source with GafChromic® EBT radiochromic films. In these investigations, the combined non-uniformity of the film and scanner response, as well as the films' sensitivity to the scanner's light source, was evaluated using multiple samples of film prior to the source dosimetry. The results of these measurements were compared with Monte Carlo data simulated using the MCNPX code. In addition, isodose curves acquired by radiochromic film and Monte Carlo simulation were compared with those provided by the GZP6 treatment planning system. Results. Scanning of samples of uniformly irradiated film demonstrated approximately 2.85% and 4.97% non-uniformity of response in the longitudinal and transverse directions of the film, respectively. Our findings also indicated that the film response is not affected by exposure to the scanner's light source, particularly in multiple scanning of film. The results of the radiochromic film measurements are in good agreement with the Monte Carlo calculations (4%) and the corresponding dose values presented by the GZP6 treatment planning system (5%). Conclusions. The results of these investigations indicate that the Microtek ScanMaker 1000XL color scanner in conjunction with GafChromic EBT film is a reliable system for dosimetric evaluation of a high dose rate brachytherapy source. PMID:23411947
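Scanner-based radiochromic film dosimetry as described above rests on converting scanned pixel intensities to a net optical density before applying a dose calibration. A hedged sketch of that arithmetic; the pixel values are invented, and real protocols additionally correct for the scanner non-uniformity quantified in this study (about 2.85% longitudinal, 4.97% transverse):

```python
import math

def net_optical_density(i_unexposed, i_exposed):
    """Net OD from mean pixel intensities of unexposed and exposed film:
    netOD = log10(I_unexposed / I_exposed)."""
    return math.log10(i_unexposed / i_exposed)

# Illustrative 16-bit scanner counts: the exposed film transmits half
# as much light as the unexposed film.
net_od = net_optical_density(52000.0, 26000.0)
```

A measured netOD-vs-dose calibration curve (not shown here) then maps net_od to absorbed dose.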

  1. Development of a transmission alpha particle dosimetry technique using A549 cells and a Ra-223 source for targeted alpha therapy.

    PubMed

    Al Darwish, R; Staudacher, A H; Li, Y; Brown, M P; Bezak, E

    2016-11-01

    In targeted radionuclide therapy, regional tumors are targeted with radionuclides delivering therapeutic radiation doses. Targeted alpha therapy (TAT) is of particular interest due to its ability to deliver alpha particles of high linear energy transfer within the confines of the tumor. However, there is a lack of data related to alpha particle distribution in TAT. These data are required to more accurately estimate the absorbed dose on a cellular level. As a result, there is a need for a dosimeter that can estimate, or better yet determine, the absorbed dose deposited by alpha particles in cells. In this study, as an initial step, the authors present a transmission dosimetry design for alpha particles using A549 lung carcinoma cells, an external alpha-particle-emitting source (radium-223; Ra-223) and a Timepix pixelated semiconductor detector. The dose delivered to the A549 lung carcinoma cell line from a Ra-223 source, considered an attractive radionuclide for alpha therapy, was investigated in the current work. A549 cells were either unirradiated (control) or irradiated for 1/2, 1, 2, or 3 h with alpha particles emitted from a Ra-223 source positioned below a monolayer of A549 cells. The Timepix detector was used to determine the number of transmitted alpha particles passing through the A549 cells, and DNA double strand breaks (DSBs) in the form of γ-H2AX foci were examined by fluorescence microscopy. The number of transmitted alpha particles was correlated with the observed DNA DSBs, and the delivered radiation dose was estimated. Additionally, the dose deposited was calculated using the Monte Carlo code SRIM. Approximately 20% of alpha particles were transmitted and detected by Timepix. The frequency and number of γ-H2AX foci increased significantly following alpha particle irradiation compared to unirradiated controls. The equivalent dose delivered to A549 cells was estimated to be approximately 0.66, 1.32, 2.53, and 3.96 Gy after 1/2, 1, 2, and 3 h irradiation, respectively, considering a relative biological effectiveness of alpha particles of 5.5. The study confirmed that the Timepix detector can be used for transmission alpha particle dosimetry. If cross-calibrated using biological dosimetry, this method will give a good indication of the biological effects of alpha particles without the need for repeated biological dosimetry, which is costly, time consuming, and not readily available.
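The equivalent-dose arithmetic quoted above is absorbed dose multiplied by the relative biological effectiveness (RBE = 5.5 for alpha particles). The absorbed doses below are back-calculated from the quoted equivalent doses and are therefore illustrative, not measured values from the paper:

```python
# Equivalent dose = absorbed dose x RBE, with RBE = 5.5 for alphas.
RBE_ALPHA = 5.5

# Absorbed doses (Gy) inferred by dividing the quoted equivalent doses
# by 5.5; one entry per 1/2, 1, 2, and 3 h irradiation interval.
absorbed_gy = [0.12, 0.24, 0.46, 0.72]

equivalent = [RBE_ALPHA * d for d in absorbed_gy]
# Reproduces the quoted ~0.66, 1.32, 2.53, 3.96 Gy values.
```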

  2. WE-B-207-00: CT Lung Cancer Screening Part 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    The US National Lung Screening Trial (NLST) was a multi-center randomized, controlled trial comparing a low-dose CT (LDCT) to posterior-anterior (PA) chest x-ray (CXR) in screening older, current and former heavy smokers for early detection of lung cancer. Recruitment was launched in September 2002 and ended in April 2004 when 53,454 participants had been randomized at 33 screening sites in equal proportions. Funded by the National Cancer Institute this trial demonstrated that LDCT screening reduced lung cancer mortality. The US Preventive Services Task Force (USPSTF) cited NLST findings and conclusions in its deliberations and analysis of lung cancer screening. Under the 2010 Patient Protection and Affordable Care Act, the USPSTF favorable recommendation regarding lung cancer CT screening assisted in obtaining third-party payer coverage for screening. The objective of this session is to provide an introduction to the NLST and the trial findings, in addition to a comprehensive review of the dosimetry investigations and assessments completed using individual NLST participant CT and CXR examinations. Session presentations will review and discuss the findings of two independent assessments, a CXR assessment and the findings of a CT investigation calculating individual organ dosimetry values. The CXR assessment reviewed a total of 73,733 chest x-ray exams that were performed on 92 chest imaging systems of which 66,157 participant examinations were used. The CT organ dosimetry investigation collected scan parameters from 23,773 CT examinations; a subset of the 75,133 CT examinations performed using 97 multi-detector CT scanners. Organ dose conversion coefficients were calculated using a Monte Carlo code. An experimentally-validated CT scanner simulation was coupled with 193 adult hybrid computational phantoms representing the height and weight of the current U.S. population.
The dose to selected organs was calculated using the organ dose library and the abstracted scan parameters. This session will review the results and summarize the individualized doses to major organs and the mean effective dose and CTDIvol estimate for 66,157 PA chest and 23,773 CT examinations respectively, using size-dependent computational phantoms coupled with Monte Carlo calculations. Learning Objectives: Review and summarize relevant NLST findings and conclusions. Understand the scope and scale of the NLST specific to participant dosimetry. Provide a comprehensive review of NLST participant dosimetry assessments. Summarize the results of an investigation providing individualized organ dose estimates for NLST participant cohorts.

  3. Exploring Model Assumptions Through Three Dimensional Mixing Simulations Using a High-order Hydro Option in the Ares Code

    NASA Astrophysics Data System (ADS)

    White, Justin; Olson, Britton; Morgan, Brandon; McFarland, Jacob; Lawrence Livermore National Laboratory Team; University of Missouri-Columbia Team

    2015-11-01

    This work presents results from a large eddy simulation of a high Reynolds number Rayleigh-Taylor instability and Richtmyer-Meshkov instability. A tenth-order compact differencing scheme on a fixed Eulerian mesh is utilized within the Ares code developed at Lawrence Livermore National Laboratory (LLNL). We explore the self-similar limit of the mixing layer growth in order to evaluate the k-L-a Reynolds-Averaged Navier-Stokes (RANS) model (Morgan and Wickett, Phys. Rev. E, 2015). Furthermore, profiles of turbulent kinetic energy, turbulent length scale, mass flux velocity, and density-specific-volume correlation are extracted in order to aid the creation of a high-fidelity LES data set for RANS modeling. Prepared by LLNL under Contract DE-AC52-07NA27344.

  4. A neutron spectrum unfolding computer code based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2014-02-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural network technology is presented. The code, called NSDann (Neutron Spectrometry and Dosimetry with Artificial Neural networks), was designed with a graphical interface; its core is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code is easy to use and presents a friendly, intuitive interface. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. Its main feature is that only seven count rates, measured with seven Bonner spheres, are required as input for unfolding the neutron spectrum; the code simultaneously calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. The code generates a full report with all information of the unfolding in HTML format. The NSDann unfolding code is freely available upon request to the authors.
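The unfolding step described above maps 7 Bonner-sphere count rates to a 60-bin spectrum with a feed-forward network. A structural sketch only: the hidden-layer size, random weights, and activation below are assumptions, whereas the real NSDann code uses a trained, pre-optimized architecture:

```python
import numpy as np

# Untrained stand-in for the embedded network: 7 inputs (count rates),
# one hidden layer (size is an assumption), 60 outputs (spectrum bins).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 7)), np.zeros(16)
W2, b2 = rng.normal(size=(60, 16)), np.zeros(60)

def unfold(count_rates):
    """Map 7 Bonner-sphere count rates to a 60-bin neutron spectrum."""
    h = np.tanh(W1 @ count_rates + b1)    # hidden layer
    return np.maximum(W2 @ h + b2, 0.0)   # clamp: fluence is non-negative

spectrum = unfold(np.ones(7))
```

The 15 dosimetric quantities the code reports would then follow by folding this spectrum with fluence-to-dose conversion coefficients.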

  5. MO-B-BRB-04: 3D Dosimetry in End-To-End Dosimetry QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibbott, G.

    Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner: Background and Motivation Understand recent developments enabling clinically practical 3D dosimetry, Appreciate 3D dosimetry workflow and dosimetry procedures, and Observe select examples from the clinic.
Sofie Ceberg: Application to dynamic radiotherapy Observe full dosimetry under dynamic radiotherapy during respiratory motion, and Understand how the measurement of high resolution dose data in an irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang: Special techniques in the clinic and research Understand the potential for 3D dosimetry in validating dose accumulation in deformable systems, and Observe the benefits of high resolution measurements for precision therapy in SRS and in MicroSBRT for small animal irradiators Geoffrey S. Ibbott: 3D Dosimetry in end-to-end dosimetry QA Understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  6. Macroscopic to Microscopic Scales of Particulate Dosimetry: From Source to Fate in the Body

    EPA Science Inventory

    Additional perspective with regards to particle dosimetry is achieved by exploring dosimetry across a range of scales from macroscopic to microscopic in scope. Typically, one thinks of dosimetry as what happens when a particle is inhaled, where it is deposited, and how it is clea...

  7. Tests of dynamic Lagrangian eddy viscosity models in Large Eddy Simulations of flow over three-dimensional bluff bodies

    NASA Astrophysics Data System (ADS)

    Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.

    2004-11-01

    Large Eddy Simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used, where the subgrid-scale (SGS) model is expected to play a crucial role. An LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general, the scale-dependent Lagrangian model produces a larger Smagorinsky coefficient than the scale-invariant one, leading to reduced distributions of resolved rms velocities, especially in the boundary layers near the bluff bodies.
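All three SGS models compared above share the Smagorinsky eddy-viscosity form, nu_t = (C_s Δ)² |S|, where |S| is the resolved strain-rate magnitude; the constant-coefficient variant fixes C_s, while the dynamic Lagrangian variants compute it on the fly. A minimal sketch with illustrative numbers:

```python
def smagorinsky_viscosity(c_s, delta, strain_mag):
    """Subgrid eddy viscosity nu_t = (C_s * Delta)^2 * |S| (m^2/s).
    c_s: Smagorinsky coefficient, delta: filter width (m),
    strain_mag: resolved strain-rate magnitude |S| (1/s)."""
    return (c_s * delta) ** 2 * strain_mag

# Illustrative values only: C_s = 0.16, 5 cm filter width, |S| = 10 1/s.
nu_t = smagorinsky_viscosity(c_s=0.16, delta=0.05, strain_mag=10.0)
```

The abstract's observation follows directly from this form: a larger coefficient (scale-dependent model) gives a larger nu_t, hence stronger damping of resolved rms velocities near the bluff bodies.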

  8. Hot air impingement on a flat plate using Large Eddy Simulation (LES) technique

    NASA Astrophysics Data System (ADS)

    Plengsa-ard, C.; Kaewbumrung, M.

    2018-01-01

    Hot gas jets impinging on a flat plate generate very high heat transfer coefficients in the impingement zone. The magnitude of the heat transfer prediction near the stagnation point is important, and accurate heat flux distributions are needed. This research studies the heat transfer and flow field resulting from a single hot air jet impinging on a wall. The simulation is carried out using the computational fluid dynamics (CFD) commercial code FLUENT. A Large Eddy Simulation (LES) approach with a subgrid-scale Smagorinsky-Lilly model is presented. The classical Werner-Wengle wall model is used to compute the predicted results of velocity and temperature near walls. The Smagorinsky constant in the turbulence model is set to 0.1 and is kept constant throughout the investigation. The hot gas jet impinging on the flat plate with a constant surface temperature is chosen to validate the predicted heat flux results against experimental data. The jet Reynolds number is equal to 20,000 at a fixed jet-to-plate spacing of H/D = 2.0. The Nusselt number on the impingement surface is calculated. As predicted by the wall model, the instantaneous computed Nusselt numbers agree fairly well with experimental data. The largest values of the calculated Nusselt number occur near the stagnation point and decrease monotonically in the wall jet region. Contour plots of instantaneous wall heat flux on the flat plate are also captured by the LES simulation.
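The Nusselt number reported on the impingement surface follows the standard definition Nu = h D / k, with the convective coefficient h obtained from the wall heat flux and the wall-to-jet temperature difference. All numerical values in this sketch are illustrative, not taken from the simulation:

```python
def nusselt(q_wall, d_jet, k_air, t_wall, t_jet):
    """Nu = h * D / k, with h = q'' / (T_wall - T_jet).
    q_wall: wall heat flux (W/m^2), d_jet: jet diameter (m),
    k_air: thermal conductivity of air (W/m K), temperatures in K."""
    h = q_wall / (t_wall - t_jet)   # convective coefficient, W/(m^2 K)
    return h * d_jet / k_air

# Illustrative numbers: 5 kW/m^2 flux, 2 cm jet, 50 K temperature excess.
nu = nusselt(q_wall=5000.0, d_jet=0.02, k_air=0.026, t_wall=350.0, t_jet=300.0)
```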

  9. Outer layer effects in wind-farm boundary layers: Coriolis forces and boundary layer height

    NASA Astrophysics Data System (ADS)

    Allaerts, Dries; Meyers, Johan

    2015-11-01

    In LES studies of wind-farm boundary layers, scale separation between the inner and outer region of the atmospheric boundary layer (ABL) is frequently assumed, i.e., wind turbines are presumed to fall within the inner layer and are not affected by outer layer effects. However, modern wind turbine and wind farm design tends towards larger rotor diameters and farm sizes, which means that outer layer effects will become more important. In a prior study, it was already shown for fully-developed wind farms that the ABL height influences the power performance. In this study, we use the in-house LES code SP-Wind to investigate the importance of outer layer effects on wind-farm boundary layers. In a suite of LES cases, the ABL height is varied by imposing a capping inversion with varying inversion strengths. Results indicate the growth of an internal boundary layer (IBL), which is limited in cases with low inversion layers. We further find that flow deceleration combined with Coriolis effects causes a change in wind direction throughout the farm. This effect increases with decreasing boundary layer height, and can result in considerable turbine wake deflection near the end of the farm. The authors are supported by the ERC (ActiveWindFarms, grant no: 306471). Computations were performed on VSC infrastructure (Flemish Supercomputer Center), funded by the Hercules Foundation and the Flemish Government department EWI.

  10. Self-similarity of a Rayleigh–Taylor mixing layer at low Atwood number with a multimode initial perturbation

    DOE PAGES

    Morgan, B. E.; Olson, B. J.; White, J. E.; ...

    2017-06-29

    High-fidelity large eddy simulation (LES) of a low-Atwood-number (A = 0.05) Rayleigh-Taylor mixing layer is performed using the tenth-order compact difference code Miranda. An initial multimode perturbation spectrum is specified in Fourier space as a function of mesh resolution such that a database of results is obtained in which each successive level of increased grid resolution corresponds approximately to one additional doubling of the mixing layer width, or generation. The database is then analyzed to determine approximate requirements for self-similarity, and a new metric is proposed to quantify how far a given simulation is from the limit of self-similarity. It is determined that mixing layer growth reaches a high degree of self-similarity after approximately 4.5 generations. Statistical convergence errors and boundary effects at late time, however, make it impossible to draw similar conclusions regarding the self-similar growth of more sensitive turbulence parameters. Finally, self-similar turbulence profiles from the LES database are compared with one-dimensional simulations using the k-L-a and BHR-2 Reynolds-averaged Navier-Stokes (RANS) models. The k-L-a model, which is calibrated to reproduce a quadratic turbulence kinetic energy profile for a self-similar mixing layer, is found to be in better agreement with the LES than BHR-2 results.
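    Self-similar Rayleigh-Taylor growth is usually diagnosed through the quadratic law h(t) = alpha*A*g*t^2. A minimal least-squares estimate of the growth coefficient from a width time series (a standard diagnostic, not necessarily the paper's proposed metric):

    ```python
    import numpy as np

    def rt_growth_coefficient(t, h, atwood, g):
        """Fit alpha in the self-similar growth law h(t) = alpha * A * g * t^2
        by least squares through the origin, given mixing-layer widths h
        sampled at times t."""
        t = np.asarray(t, dtype=float)
        h = np.asarray(h, dtype=float)
        x = atwood * g * t ** 2          # regressor A*g*t^2
        return float(np.sum(x * h) / np.sum(x * x))
    ```

    At late time, statistical noise in h(t) makes alpha estimates sensitive to the fitting window, which is one reason the paper proposes a dedicated self-similarity metric.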

  11. Basic Instrumentation of a Low Speed Axial Compressor

    NASA Astrophysics Data System (ADS)

    Blidi, Sami; Miton, Hubert

    1995-07-01

    Flow modelling studies of axial compressors must rely on experimental results that allow a better understanding of the aerodynamic phenomena and are suitable for validating numerical codes. To this end, a low-speed axial compressor test bed has been developed and instrumented at L.E.M.F.I. for detailed exploration of the flow in a four-stage turbomachine characterised by small spacing between the blade rows. In this paper, brief descriptions are first given of the geometrical characteristics of the compressor, the operation of the test bed's control system, and the instrumentation. Next, the means used to explore the flow are discussed. Finally, typical results concerning the global and local performance measurements are presented and briefly analysed. This work made it possible to instrument the L.E.M.F.I. four-stage axial compressor test bed and to obtain the steady and unsteady characteristics of the flow between the blade rows using five-hole pressure probes and hot-film probes.

  12. [A comparative study of coding and information systems for the evaluation of medical and social conditions: the case of addictive disorders].

    PubMed

    Bourdais-Mannone, Claire; Cherikh, Faredj; Gicquel, Nathalie; Gelsi, Eve; Jove, Frédérique; Staccini, Pascal

    2011-01-01

    The purpose of this study was to conduct a descriptive and comparative analysis of the tools used by healthcare professionals specializing in addictive disorders, with the aim of bringing the relevant information systems closer together. The evaluation guide used to assess the compensation needs of disabled persons treated in "Maisons Départementales des Personnes Handicapées" (centres for disabled people) organizes information in different areas, including a psychological component. The guide includes social and environmental information in the "Recueil Commun sur les Addictions et les Prises en charges" (Joint Report on Drug Addiction and Drug Treatment). While the program for the medicalization of information systems includes care data, the current information about social situations remains inadequate. The international classification of diseases provides synthetic diagnostic codes to describe substance use, etiologic factors and the somatic and psychological complications inherent to addictive disorders. The current system could be radically simplified and harmonized, and would benefit from adopting a more individualized approach to non-substance behavioral addictions. The international classification of disabilities provides tools for evaluating the psychological component included in the recent definition of addictive disorders. Legal information should play an integral role in the structure of the information system and in international classifications. The prevalence of episodes of care and treatment of addictive and psychological disorders was assessed at Nice University Hospital in all disciplines. Except in addiction treatment units, very few patients were found to have a RECAP file.

  13. The Latin American Biological Dosimetry Network (LBDNet).

    PubMed

    García, O; Di Giorgio, M; Radl, A; Taja, M R; Sapienza, C E; Deminge, M M; Fernández Rearte, J; Stuck Oliveira, M; Valdivia, P; Lamadrid, A I; González, J E; Romero, I; Mandina, T; Guerrero-Carbajal, C; ArceoMaldonado, C; Cortina Ramírez, G E; Espinoza, M; Martínez-López, W; Di Tomasso, M

    2016-09-01

    Biological dosimetry is a necessary support for national radiation protection programmes and emergency response schemes. The Latin American Biological Dosimetry Network (LBDNet) was formally founded in 2007 to provide early biological dosimetry assistance in case of radiation emergencies in the Latin American region. The main topics considered in the foundational document of the network are presented here, comprising: mission, partners, and concept of operation, including the mechanism for requesting biological dosimetry assistance in the region and the network's capabilities. The process for network activation and the role of the coordinating laboratory during biological dosimetry emergency response are also presented. This information is preceded by historical remarks on biological dosimetry cooperation in Latin America. A summary of the main experimental and practical results already obtained by the LBDNet is also included.

  14. Reference dosimeter system of the IAEA

    NASA Astrophysics Data System (ADS)

    Mehta, Kishor; Girzikowsky, Reinhard

    1995-09-01

    Quality assurance programmes must be in operation at radiation processing facilities to satisfy national and international standards. Since dosimetry has a vital function in these QA programmes, it is imperative that the dosimetry systems in use at these facilities are well calibrated, with traceability to a Primary Standard Dosimetry Laboratory. As a service to the Member States, the International Atomic Energy Agency operates the International Dose Assurance Service (IDAS) to assist in this process. The transfer standard dosimetry system used for this service is based on ESR spectrometry. The paper describes the activities undertaken at the IAEA Dosimetry Laboratory to establish the QA programme for its reference dosimetry system. There are four key elements of such a programme: a quality assurance manual; calibration that is traceable to a Primary Standard Dosimetry Laboratory; a clear and detailed statement of uncertainty in the dose measurement; and periodic quality audits.

  15. Topical Review: Polymer gel dosimetry

    PubMed Central

    Baldock, C; De Deene, Y; Doran, S; Ibbott, G; Jirasek, A; Lepage, M; McAuley, K B; Oldham, M; Schreiner, L J

    2010-01-01

    Polymer gel dosimeters are fabricated from radiation-sensitive chemicals which, upon irradiation, polymerize as a function of the absorbed radiation dose. These gel dosimeters, with the capacity to uniquely record the radiation dose distribution in three dimensions (3D), have specific advantages when compared to one-dimensional dosimeters, such as ion chambers, and two-dimensional dosimeters, such as film. These advantages are particularly significant in dosimetry situations where steep dose gradients exist, such as in intensity-modulated radiation therapy (IMRT) and stereotactic radiosurgery. Polymer gel dosimeters also have specific advantages for brachytherapy dosimetry. Potential dosimetry applications include those for low-energy x-rays, high-linear energy transfer (LET) and proton therapy, radionuclide dosimetry and boron neutron capture therapy dosimetry. These 3D dosimeters are radiologically soft-tissue equivalent, with properties that may be modified depending on the application. The 3D radiation dose distribution in polymer gel dosimeters may be imaged using magnetic resonance imaging (MRI), optical computerized tomography (optical-CT), x-ray CT or ultrasound. The fundamental science underpinning polymer gel dosimetry is reviewed along with the various evaluation techniques. Clinical dosimetry applications of polymer gel dosimetry are also presented. PMID:20150687
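    Reading out a gel dosimeter with MRI typically means mapping a measured relaxation rate R2 back to dose through a calibration curve. A minimal sketch assuming a linear R2-dose relationship (real gels are linear only over a limited dose range, and the calibration form varies by formulation):

    ```python
    import numpy as np

    def fit_r2_dose_calibration(doses, r2_values):
        """Fit R2 = slope*D + intercept for a polymer gel dosimeter read
        out by MRI, and return a function mapping measured R2 (= 1/T2)
        back to dose.  Linear model assumed for illustration."""
        slope, intercept = np.polyfit(doses, r2_values, 1)

        def dose_from_r2(r2):
            return (np.asarray(r2, dtype=float) - intercept) / slope

        return dose_from_r2
    ```

    In practice the calibration is established from a set of gel vials irradiated to known doses and scanned alongside the measurement gel.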

  16. SU-F-J-100: Standardized Biodistribution Template for Nuclear Medicine Dosimetry Collection and Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kesner, A; Poli, G; Beykan, S

    Purpose: As the field of nuclear medicine moves forward with efforts to integrate radiation dosimetry into clinical practice, we can identify the challenge posed by the lack of standardized dose calculation methods and protocols. All personalized internal dosimetry is derived by projecting biodistribution measurements into dosimetry calculations. In an effort to standardize the organization of data and its reporting, we have developed, as a sequel to the EANM recommendation on "Good Dosimetry Reporting", a freely available biodistribution template, which can be used to create a common point of reference for dosimetry data. It can be disseminated, interpreted, and used for method development widely across the field. Methods: A generalized biodistribution template was built in a comma-delimited format (.csv) to be completed by users performing biodistribution measurements. The template is available for free download. The download site includes instructions and other usage details on the template. Results: This is a new resource developed for the community. It is our hope that users will consider integrating it into their dosimetry operations. Having biodistribution data available and easily accessible for all patients processed is a strategy for organizing large amounts of information. It may enable users to create their own databases that can be analyzed for multiple aspects of dosimetry operations. Furthermore, it enables population data to easily be reprocessed using different dosimetry methodologies. With respect to dosimetry-related research and publications, the biodistribution template can be included as supplementary material, and will allow others in the community to better compare calculations and results achieved. Conclusion: As dosimetry in nuclear medicine becomes more routinely applied in clinical applications, we, as a field, need to develop the infrastructure for handling large amounts of data. Our organ-level biodistribution template can be used as a standard format for data collection and organization, as well as for dosimetry research and software development.
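    A CSV biodistribution record like the one described feeds directly into dose calculation via the time-integrated activity of each organ. The column names below are hypothetical (the actual template's fields are defined on its download site, not in the abstract); the integration step is the standard trapezoidal rule over the time-activity curve:

    ```python
    import csv
    import io

    # Hypothetical layout of one organ's time-activity data.
    SAMPLE = """organ,time_h,activity_MBq
    liver,1,100
    liver,4,80
    liver,24,20
    """.replace("    ", "")

    def time_integrated_activity(csv_text, organ):
        """Trapezoidal integral of the time-activity curve (MBq*h),
        the quantity that feeds organ-level dose calculations."""
        rows = [r for r in csv.DictReader(io.StringIO(csv_text))
                if r["organ"] == organ]
        t = [float(r["time_h"]) for r in rows]
        a = [float(r["activity_MBq"]) for r in rows]
        return sum(0.5 * (a[i] + a[i + 1]) * (t[i + 1] - t[i])
                   for i in range(len(t) - 1))
    ```

    Keeping every patient's measurements in one common tabular format is what makes the reprocessing and population analyses described above straightforward.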

  17. Evaluation and implementation of triple‐channel radiochromic film dosimetry in brachytherapy

    PubMed Central

    Bradley, David; Nisbet, Andrew

    2014-01-01

    The measurement of dose distributions in clinical brachytherapy, for the purpose of quality control, commissioning or dosimetric audit, is challenging and requires development. Radiochromic film dosimetry with a commercial flatbed scanner may be suitable, but careful methodologies are required to control various sources of uncertainty. Triple‐channel dosimetry has recently been utilized in external beam radiotherapy to improve the accuracy of film dosimetry, but its use in brachytherapy, with characteristic high maximum doses, steep dose gradients, and small scales, has been less well researched. We investigate the use of advanced film dosimetry techniques for brachytherapy dosimetry, evaluating uncertainties and assessing the mitigation afforded by triple‐channel dosimetry. We present results on postirradiation film darkening, lateral scanner effect, film surface perturbation, film active layer thickness, film curling, and examples of the measurement of clinical brachytherapy dose distributions. The lateral scanner effect in brachytherapy film dosimetry can be very significant, up to 23% dose increase at 14 Gy, at ± 9 cm lateral from the scanner axis for simple single‐channel dosimetry. Triple‐channel dosimetry mitigates the effect, but still limits the useable width of a typical scanner to less than 8 cm at high dose levels to give dose uncertainty to within 1%. Triple‐channel dosimetry separates dose and dose‐independent signal components, and effectively removes disturbances caused by film thickness variation and surface perturbations in the examples considered in this work. The use of reference dose films scanned simultaneously with brachytherapy test films is recommended to account for scanner variations from calibration conditions. Postirradiation darkening, which is a continual logarithmic function with time, must be taken into account between the reference and test films. 
Finally, films must be flat when scanned to avoid the Callier‐like effects and to provide reliable dosimetric results. We have demonstrated that radiochromic film dosimetry with GAFCHROMIC EBT3 film and a commercial flatbed scanner is a viable method for brachytherapy dose distribution measurement, and uncertainties may be reduced with triple‐channel dosimetry and specific film scan and evaluation methodologies. PACS numbers: 87.55.Qr, 87.56.bg, 87.55.km PMID:25207417
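    The core idea of triple-channel dosimetry, separating the dose signal from a common dose-independent disturbance, can be sketched as a small search: model each channel's response as its calibration value scaled by a shared perturbation factor, and pick the dose that best explains all three channels at once. This is an illustrative simplification, not the exact algorithm of EBT3 analysis software:

    ```python
    import numpy as np

    def triple_channel_dose(measured, calib, doses):
        """Estimate dose from three channel readings.

        measured : length-3 array of channel responses
        calib    : three calibration functions f_k(D), one per channel
        doses    : candidate dose grid to search

        Each channel is modeled as f_k(D) * (1 + delta), with delta a
        common dose-independent perturbation (film thickness, scanner
        nonuniformity).  For each candidate dose the best-fit delta and
        residual are computed; the dose minimizing the residual wins.
        """
        best_resid, best_dose = np.inf, None
        for d in doses:
            model = np.array([f(d) for f in calib])
            # least-squares delta for measured = model * (1 + delta)
            delta = np.sum(model * (measured - model)) / np.sum(model ** 2)
            resid = np.sum((measured - model * (1 + delta)) ** 2)
            if resid < best_resid:
                best_resid, best_dose = resid, d
        return best_dose
    ```

    Note that the separation only works because the three channels have different dose-response shapes; with proportional calibrations the perturbation would be indistinguishable from a dose change.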

  18. MO-B-BRB-00: Three Dimensional Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise, especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations, describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner, Background and Motivation: understand recent developments enabling clinically practical 3D dosimetry; appreciate 3D dosimetry workflow and dosimetry procedures; observe select examples from the clinic. 
    Sofie Ceberg, Application to dynamic radiotherapy: observe full dosimetry under dynamic radiotherapy during respiratory motion; understand how the measurement of high-resolution dose data in an irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang, Special techniques in the clinic and research: understand the potential for 3D dosimetry in validating dose accumulation in deformable systems; observe the benefits of high-resolution measurements for precision therapy in SRS and in MicroSBRT for small animal irradiators. Geoffrey S. Ibbott, 3D Dosimetry in end-to-end dosimetry QA: understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Funding: Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  19. MO-B-BRB-03: 3D Dosimetry in the Clinic: Validating Special Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juang, T.


  20. MO-B-BRB-01: 3D Dosimetry in the Clinic: Background and Motivation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, L.


  1. MO-B-BRB-02: 3D Dosimetry in the Clinic: IMRT Technique Validation in Sweden

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceberg, S.


  2. Large-Eddy Simulation of the Flat-plate Turbulent Boundary Layer at High Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Inoue, Michio

    The near-wall, subgrid-scale (SGS) model [Chung and Pullin, "Large-eddy simulation and wall-modeling of turbulent channel flow", J. Fluid Mech. 631, 281-309 (2009)] is used to perform large-eddy simulations (LES) of the incompressible, developing, smooth-wall, flat-plate turbulent boundary layer. In this model, the stretched-vortex SGS closure is utilized in conjunction with a tailored, near-wall model designed to incorporate anisotropic vorticity scales in the presence of the wall. The composite SGS-wall model is presently incorporated into a computer code suitable for the LES of developing flat-plate boundary layers. This is then used to study several aspects of zero- and adverse-pressure-gradient turbulent boundary layers. First, LES of the zero-pressure-gradient turbulent boundary layer are performed at Reynolds numbers Re_theta, based on the free-stream velocity and the momentum thickness, in the range Re_theta = 10^3-10^12. Results include the inverse skin friction coefficient, 2/Cf, velocity profiles, the shape factor H, the Karman "constant", and the Coles wake factor as functions of Re_theta. Comparisons with some direct numerical simulation (DNS) and experiment are made, including turbulent intensity data from atmospheric-layer measurements at Re_theta = O(10^6). At extremely large Re_theta, the empirical Coles-Fernholz relation for the skin-friction coefficient provides a reasonable representation of the LES predictions. While the present LES methodology cannot of itself probe the structure of the near-wall region, the present results show turbulence intensities that scale on the wall-friction velocity and on the Clauser length scale over almost all of the outer boundary layer. It is argued that the LES is suggestive of the asymptotic, infinite-Reynolds-number limit for the smooth-wall turbulent boundary layer, and different ways in which this limit can be approached are discussed. 
    The maximum Re_theta of the present simulations appears to be limited by machine precision, and it is speculated, but not demonstrated, that even larger Re_theta could be achieved with quad- or higher-precision arithmetic. Second, the time-series velocity signals obtained from LES within the logarithmic region of the zero-pressure-gradient turbulent boundary layer are used in combination with an empirical, predictive inner-outer wall model [Marusic et al., "Predictive model for wall-bounded turbulent flow", Science 329, 193 (2010)] to calculate the statistics of the fluctuating streamwise velocity in the inner region of the zero-pressure-gradient turbulent boundary layer. Results, including spectra and moments up to fourth order, are compared with equivalent predictions using experimental time series, as well as with direct experimental measurements at Reynolds numbers Re_tau, based on the friction velocity and the boundary layer thickness, of Re_tau = 7,300, 13,600 and 19,000. LES combined with the wall model are then used to extend the inner-layer predictions to Reynolds numbers Re_tau = 62,000, 100,000 and 200,000 that lie within a gap in log(Re_tau) space between laboratory measurements and surface-layer atmospheric experiments. The present results support a log-like increase in the near-wall peak of the streamwise turbulence intensities with Re_tau, and also provide a means of extending LES results at large Reynolds numbers to the near-wall region of wall-bounded turbulent flows. Finally, we apply the wall model to LES of a turbulent boundary layer subject to an adverse pressure gradient. Computed statistics are found to be consistent with recent experiments, and some Reynolds number similarity is observed over a range of two orders of magnitude.
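    The Coles-Fernholz correlation mentioned above is a one-line formula, 1/sqrt(Cf/2) = (1/kappa)*ln(Re_theta) + C. A minimal evaluation sketch, using commonly quoted constants (the paper's calibration may differ slightly):

    ```python
    import math

    def coles_fernholz_cf(re_theta, kappa=0.384, c=4.127):
        """Coles-Fernholz skin-friction correlation for a zero-pressure-
        gradient flat-plate boundary layer:
            1/sqrt(Cf/2) = (1/kappa)*ln(Re_theta) + C
        solved for Cf.  Constants follow commonly quoted values."""
        return 2.0 / ((math.log(re_theta) / kappa + c) ** 2)
    ```

    The logarithmic dependence is why Cf decays so slowly: it drops by less than a factor of ten while Re_theta grows by nine orders of magnitude.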

  3. Quantitative imaging for clinical dosimetry

    NASA Astrophysics Data System (ADS)

    Bardiès, Manuel; Flux, Glenn; Lassmann, Michael; Monsieurs, Myriam; Savolainen, Sauli; Strand, Sven-Erik

    2006-12-01

    Patient-specific dosimetry in nuclear medicine is now a legal requirement in many countries throughout the EU for targeted radionuclide therapy (TRT) applications. To achieve that goal, an increased level of accuracy in dosimetry procedures is needed. Current research in nuclear medicine dosimetry should not only aim at developing new methods to assess the delivered radiation absorbed dose at the patient level, but also at ensuring that the proposed methods can be put into practice in a sufficient number of institutions. A unified dosimetry methodology is required for making clinical outcome comparisons possible.

  4. The Impact of Manual Segmentation of CT Images on Monte Carlo Based Skeletal Dosimetry

    NASA Astrophysics Data System (ADS)

    Frederick, Steve; Jokisch, Derek; Bolch, Wesley; Shah, Amish; Brindle, Jim; Patton, Phillip; Wyler, J. S.

    2004-11-01

    Radiation doses to the skeleton from internal emitters are of importance in both protection of radiation workers and patients undergoing radionuclide therapies. Improved dose estimates involve obtaining two sets of medical images. The first image provides the macroscopic boundaries (spongiosa volume and cortical shell) of the individual skeletal sites. A second, higher resolution image of the spongiosa microstructure is also obtained. These image sets then provide the geometry for a Monte Carlo radiation transport code. Manual segmentation of the first image is required in order to provide the macrostructural data. For this study, multiple segmentations of the same CT image were performed by multiple individuals. The segmentations were then used in the transport code and the results compared in order to determine the impact of differing segmentations on the skeletal doses. This work has provided guidance on the extent of training required of the manual segmenters. (This work was supported by a grant from the National Institutes of Health.)

  5. A High-Resolution Capability for Large-Eddy Simulation of Jet Flows

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2011-01-01

    A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, from second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving (DRP) schemes, from 7- to 13-point stencils, are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow for the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
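
    Wider central-difference stencils of the kind described above buy accuracy per grid point, which is what permits relatively sparse grids. A minimal sketch of the idea, using standard 3- and 5-point first-derivative stencils as stand-ins (not the actual 13-point DRP scheme of Bogey and Bailly used in the code):

```python
import math

def ddx_2nd(f, x, h):
    """Second-order, 3-point central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def ddx_4th(f, x, h):
    """Fourth-order, 5-point central difference for f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

# Compare errors against the exact derivative d/dx sin(x) = cos(x) at x = 1:
h = 0.1
err2 = abs(ddx_2nd(math.sin, 1.0, h) - math.cos(1.0))
err4 = abs(ddx_4th(math.sin, 1.0, h) - math.cos(1.0))
print(err2, err4)  # the wider stencil is markedly more accurate at the same h
```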

  6. Relationship between student selection criteria and learner success for medical dosimetry students

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Jamie, E-mail: jabaker@mdanderson.org; Tucker, Debra; Raynes, Edilberto

    Medical dosimetry education occupies a specialized branch of allied health higher education. Noted international shortages of health care workers, reduced university funding, limitations on faculty staffing, trends in learner attrition, and increased enrollment of nontraditional students force medical dosimetry educational leadership to reevaluate current admission practices. Program officials wish to select medical dosimetry students with the best chances of successful graduation. The purpose of the quantitative ex post facto correlation study was to investigate the relationship between applicant characteristics (cumulative undergraduate grade point average (GPA), science grade point average (SGPA), prior experience as a radiation therapist, and previous academic degrees) and the successful completion of a medical dosimetry program, as measured by graduation. A key finding from the quantitative study was the statistically significant positive correlation between a student's previous degree and his or her successful graduation from the medical dosimetry program. Future research investigations could include a larger research sample, representative of more medical dosimetry student populations, and additional studies concerning the relationship of previous work as a radiation therapist and the effect on success as a medical dosimetry student. Based on the quantitative correlation analysis, medical dosimetry leadership on admissions committees could revise student selection rubrics to place less emphasis on an applicant's undergraduate cumulative GPA and increase the weight assigned to previous degrees.

  7. Relationship between student selection criteria and learner success for medical dosimetry students.

    PubMed

    Baker, Jamie; Tucker, Debra; Raynes, Edilberto; Aitken, Florence; Allen, Pamela

    2016-01-01

    Medical dosimetry education occupies a specialized branch of allied health higher education. Noted international shortages of health care workers, reduced university funding, limitations on faculty staffing, trends in learner attrition, and increased enrollment of nontraditional students force medical dosimetry educational leadership to reevaluate current admission practices. Program officials wish to select medical dosimetry students with the best chances of successful graduation. The purpose of the quantitative ex post facto correlation study was to investigate the relationship between applicant characteristics (cumulative undergraduate grade point average (GPA), science grade point average (SGPA), prior experience as a radiation therapist, and previous academic degrees) and the successful completion of a medical dosimetry program, as measured by graduation. A key finding from the quantitative study was the statistically significant positive correlation between a student's previous degree and his or her successful graduation from the medical dosimetry program. Future research investigations could include a larger research sample, representative of more medical dosimetry student populations, and additional studies concerning the relationship of previous work as a radiation therapist and the effect on success as a medical dosimetry student. Based on the quantitative correlation analysis, medical dosimetry leadership on admissions committees could revise student selection rubrics to place less emphasis on an applicant's undergraduate cumulative GPA and increase the weight assigned to previous degrees. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  8. Properties of Principal TL (Thermoluminescence) Dosimeters.

    DTIC Science & Technology

    1983-10-01

    thermoluminescence dosimetry (TLD) emerged as the preferred means because of convenience of batch evaluation, reusability, large detection range, linearity and...personnel dosimetry, thermoluminescence dosimetry has emerged as a superior technique due to its manifold advantages over other methods of dose...their suitability for dosimetry. A brief description of important TL materials and their properties is documented in this report.

  9. AFRRI Neutron Dosimetry and Radiobiology Conference

    DTIC Science & Technology

    1988-11-09

    Neutron Dosimetry and Radiobiology, 8 - 9 November 1988, Sponsored by Defense Nuclear Agency, ARMED FORCES RADIOBIOLOGY RESEARCH INSTITUTE...neutron radiation is less amenable to amelioration by chemical radioprotectants and more difficult to assess by means of physical dosimetry. These...neutron dosimetry and radiobiology we have witnessed in the past several years, could not have been possible without the sustained efforts of many

  10. Sixth international radiopharmaceutical dosimetry symposium: Proceedings. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.-Stelson, A.T.; Stabin, M.G.; Sparks, R.B.

    1999-01-01

    This conference was held May 7--10 in Gatlinburg, Tennessee. The purpose of this conference was to provide a multidisciplinary forum for exchange of state-of-the-art information on radiopharmaceutical dosimetry. Attention is focused on the following: quantitative analysis and treatment planning; cellular and small-scale dosimetry; dosimetric models; radiopharmaceutical kinetics and dosimetry; and animal models, extrapolation, and uncertainty.

  11. Thin film tritium dosimetry

    DOEpatents

    Moran, Paul R.

    1976-01-01

    The present invention provides a method for tritium dosimetry. A dosimeter comprising a thin film of a material having relatively sensitive RITAC-RITAP dosimetry properties is exposed to radiation from tritium, and after the dosimeter has been removed from the source of the radiation, the low energy electron dose deposited in the thin film is determined by radiation-induced, thermally-activated polarization dosimetry techniques.

  12. PREFACE: Third International Conference on Radiotherapy Gel Dosimetry

    NASA Astrophysics Data System (ADS)

    DeDeene, Yves; Baldock, Clive

    2004-01-01

    Gel dosimetry is not merely another dosimetry technique. Gel dosimeters are integrating dosimeters that enable dose verification in three dimensions. The application of a 3D dosimetry technique in the clinic would give a real push to the implementation of advanced high-precision radiotherapy technologies in many institutes. It can be expected that with the recent developments in the field towards more user-friendly gel systems and imaging modalities, gel dosimetry will become a vital link in the chain of high-precision radiation cancer therapy in the near future. Many researchers all over the world have contributed to the emerging technology of gel dosimetry. The research field of gel dosimetry is recognized to be very broad, ranging from polymer and analytical chemistry and materials research to imaging technologies. The DOSGEL conferences in the past have proven to be an important forum at which material scientists, chemists, medical physicists, magnetic resonance imaging and radiation specialists brought together a critical mass of thoughts, findings and considerations. DOSGEL 2004 has been endorsed by many international, supra-national and national medical physics organizations and publishers. These proceedings contain 51 papers that cover various aspects of gel dosimetry.

  13. Test Analysis Tools to Ensure Higher Quality of On-Board Real Time Software for Space Applications

    NASA Astrophysics Data System (ADS)

    Boudillet, O.; Mescam, J.-C.; Dalemagne, D.

    2008-08-01

    EADS Astrium Space Transportation, in its Les Mureaux premises, is responsible for the onboard SW of the French M51 nuclear deterrent missile. Over one million lines of code, mostly in Ada, were also developed there for the Automated Transfer Vehicle (ATV) onboard SW and for the flight control SW of the ARIANE5 launcher which put the ATV into orbit. As part of the ATV SW, ASTRIUM ST has developed the first Category A SW ever qualified for a European space application. To ensure that all these embedded SW have been developed with the highest quality and reliability level, specific development tools have been designed to cover the steps of source code verification, automated validation testing and complete target instruction coverage verification. Three such dedicated tools are presented here.

  14. Technical considerations for implementation of x-ray CT polymer gel dosimetry.

    PubMed

    Hilts, M; Jirasek, A; Duzenli, C

    2005-04-21

    Gel dosimetry is the most promising 3D dosimetry technique in current radiation therapy practice. X-ray CT has been shown to be a feasible method of reading out polymer gel dosimeters and, with the high accessibility of CT scanners to cancer hospitals, presents an exciting possibility for clinical implementation of gel dosimetry. In this study we report on technical considerations for implementation of x-ray CT polymer gel dosimetry. Specifically phantom design, CT imaging methods, imaging time requirements and gel dose response are investigated. Where possible, recommendations are made for optimizing parameters to enhance system performance. The dose resolution achievable with an optimized system is calculated given voxel size and imaging time constraints. Results are compared with MRI and optical CT polymer gel dosimetry results available in the literature.

  15. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user. However, the first requires tweaking on extremely rare occasions, the gradient requires rare adjustments, and the last parameter needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point due to extremely steep and narrow regions within the fluence maps. In such cases, our code allows manual selection of a point, although this too is difficult, since the fluence map does not lend itself to an appropriate measurement point selection.
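
    The dilation and erosion steps mentioned above are standard binary-morphology operations. A minimal, self-contained sketch (not the authors' code) of erosion and dilation on a set of pixel coordinates, using a 3x3 (8-connected) structuring element:

```python
# 3x3 (8-connected) structuring element, including the center pixel.
NBRS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

def erode(pixels, steps=1):
    """Keep only pixels whose entire 3x3 neighborhood is inside the region
    (shrinks the region, removing thin or steep-edged features)."""
    for _ in range(steps):
        pixels = {p for p in pixels
                  if all((p[0] + dx, p[1] + dy) in pixels for dx, dy in NBRS)}
    return pixels

def dilate(pixels, steps=1):
    """Grow the region by one pixel in every direction per step."""
    for _ in range(steps):
        pixels = {(p[0] + dx, p[1] + dy) for p in pixels for dx, dy in NBRS}
    return pixels

# A 5x5 solid square: one erosion leaves only its 3x3 core.
square = {(x, y) for x in range(5) for y in range(5)}
core = erode(square)
print(len(core))  # 9
```

    In a pipeline like the one described, eroding a candidate region keeps the measurement point away from region boundaries and steep dose gradients.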

  16. Computer Aided Dosimetry and Verification of Exposure to Radiation

    DTIC Science & Technology

    2002-06-01

    Event matrix 2. Hematopoietic * Absolute blood counts * Relative blood counts 3. Dosimetry * TLD * EPD...Quantitative * Radiation survey * Whole body...Defence Research and Development Canada / Recherche et développement pour la défense Canada...Computer Aided Dosimetry and Verification of Exposure to Radiation, Edward Waller, SAIC Canada; Robert Z Stodilka, Radiation Effects Group, Space Systems and

  17. Design and Calibration of a X-Ray Millibeam

    DTIC Science & Technology

    2005-12-01

    developed for use in Fricke dosimetry, parallel-plate ionization chambers, Lithium Fluoride thermoluminescent dosimetry (TLD), and EBT GafChromic...thermoluminescent dosimetry (TLD), and EBT GafChromic film to characterize the spatial distribution and accuracy of the doses produced by the Faxitron. A...absorbed dose calibration factors for use in Fricke dosimetry, parallel-plate ionization chambers, Lithium Fluoride (LiF) TLD, and EBT GafChromic film. The

  18. Modeling Cell and Tumor-Metastasis Dosimetry with the Particle and Heavy Ion Transport Code System (PHITS) Software for Targeted Alpha-Particle Radionuclide Therapy.

    PubMed

    Lee, Dongyoul; Li, Mengshi; Bednarz, Bryan; Schultz, Michael K

    2018-06-26

    The use of targeted radionuclide therapy for cancer is on the rise. While beta-particle-emitting radionuclides have been extensively explored for targeted radionuclide therapy, alpha-particle-emitting radionuclides are emerging as effective alternatives. In this context, fundamental understanding of the interactions and dosimetry of these emitted particles with cells in the tumor microenvironment is critical to ascertaining the potential of alpha-particle-emitting radionuclides. One important parameter that can be used to assess these metrics is the S-value. In this study, we characterized several alpha-particle-emitting radionuclides (and their associated radionuclide progeny) regarding S-values in the cellular and tumor-metastasis environments. The Particle and Heavy Ion Transport code System (PHITS) was used to obtain S-values via Monte Carlo simulation for cell and tumor metastasis resulting from interactions with the alpha-particle-emitting radionuclides lead-212 (212Pb), actinium-225 (225Ac) and bismuth-213 (213Bi); these values were compared to the beta-particle-emitting radionuclides yttrium-90 (90Y) and lutetium-177 (177Lu) and an Auger-electron-emitting radionuclide indium-111 (111In). The effect of cellular internalization on S-value was explored at increasing degree of internalization for each radionuclide. This aspect of S-value determination was further explored in a cell line-specific fashion for six different cancer cell lines based on the cell dimensions obtained by confocal microscopy. S-values from PHITS were in good agreement with MIRDcell S-values (cellular S-values) and the values found by Hindié et al. (tumor S-values). In the cellular model, the 212Pb and 213Bi decay series produced S-values that were 50- to 120-fold higher than 177Lu, while the 225Ac decay series analysis suggested S-values that were 240- to 520-fold higher than 177Lu. 
S-values arising with 100% cellular internalization were two- to sixfold higher for the nucleus when compared to 0% internalization. The tumor dosimetry model defines the relative merit of radionuclides and suggests alpha particles may be effective for large tumors as well as small tumor metastases. These results from PHITS modeling substantiate emerging evidence that alpha-particle-emitting radionuclides may be an effective alternative to beta-particle-emitting radionuclides for targeted radionuclide therapy due to preferred dose-deposition profiles in the cellular and tumor metastasis context. These results further suggest that internalization of alpha-particle-emitting radionuclides via radiolabeled ligands may increase the relative biological effectiveness of radiotherapeutics.
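
    In the MIRD formalism underlying S-values, the absorbed dose to a target region is the product of the time-integrated activity in the source region and the S-value for that source-target pair. A minimal sketch, with hypothetical S-values chosen only to illustrate an alpha-versus-beta difference of the order reported above (these are not values from the paper):

```python
def absorbed_dose(n_decays, s_value):
    """MIRD formalism: D = A_tilde * S, where A_tilde is the
    time-integrated activity (total number of decays) in the source
    region and S is the absorbed dose per decay (Gy/decay) for a
    given source-target pair."""
    return n_decays * s_value

# Hypothetical S-values (Gy/decay) for the same cell geometry,
# illustrating a ~100-fold alpha-vs-beta difference; not paper values.
s_alpha = 1.0e-3  # alpha emitter, nucleus self-dose
s_beta = 1.0e-5   # beta emitter, same geometry

ratio = absorbed_dose(1e4, s_alpha) / absorbed_dose(1e4, s_beta)
print(ratio)
```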

  19. MO-A-BRB-01: TG191: Clinical Use of Luminescent Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S.

    This presentation will highlight the upcoming TG-191 report: Clinical Use of Luminescent Dosimeters. Luminescent dosimetry based on TLD and OSLD is a practical, accurate, and precise technique for point dosimetry in medical physics applications. The charges of Task Group 191 were to detail the methodologies for practical and optimal luminescent dosimetry in a clinical setting. This includes (1) To review the variety of TLD/OSL materials available, including features and limitations of each. (2) To outline the optimal steps to achieve accurate and precise dosimetry with luminescent detectors and to evaluate the uncertainty induced when less rigorous procedures are used. (3) To develop consensus guidelines on the optimal use of luminescent dosimeters for clinical practice. (4) To develop guidelines for special medically relevant uses of TLDs/OSLs (e.g., mixed-field photon/neutron dosimetry, particle beam dosimetry, skin dosimetry). While this report provides general guidelines for arbitrary TLD and OSLD processes, the report, and therefore this presentation, provide specific guidance for TLD-100 (LiF:Ti,Mg) and nanoDot (Al2O3:C) dosimeters because of their prevalence in clinical practice. Learning Objectives: Understand the available dosimetry systems, and basic theory of their operation. Understand the range of dose determination methodologies and the uncertainties associated with them. Become familiar with special considerations for TLD/OSLD relevant for special clinical situations. Learn recommended commissioning and QA procedures for these dosimetry systems.

  20. MO-A-BRB-00: TG191: Clinical Use of Luminescent Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This presentation will highlight the upcoming TG-191 report: Clinical Use of Luminescent Dosimeters. Luminescent dosimetry based on TLD and OSLD is a practical, accurate, and precise technique for point dosimetry in medical physics applications. The charges of Task Group 191 were to detail the methodologies for practical and optimal luminescent dosimetry in a clinical setting. This includes (1) To review the variety of TLD/OSL materials available, including features and limitations of each. (2) To outline the optimal steps to achieve accurate and precise dosimetry with luminescent detectors and to evaluate the uncertainty induced when less rigorous procedures are used. (3) To develop consensus guidelines on the optimal use of luminescent dosimeters for clinical practice. (4) To develop guidelines for special medically relevant uses of TLDs/OSLs (e.g., mixed-field photon/neutron dosimetry, particle beam dosimetry, skin dosimetry). While this report provides general guidelines for arbitrary TLD and OSLD processes, the report, and therefore this presentation, provide specific guidance for TLD-100 (LiF:Ti,Mg) and nanoDot (Al2O3:C) dosimeters because of their prevalence in clinical practice. Learning Objectives: Understand the available dosimetry systems, and basic theory of their operation. Understand the range of dose determination methodologies and the uncertainties associated with them. Become familiar with special considerations for TLD/OSLD relevant for special clinical situations. Learn recommended commissioning and QA procedures for these dosimetry systems.

  1. Assessment of national dosimetry quality audits results for teletherapy machines from 1989 to 2015.

    PubMed

    Muhammad, Wazir; Ullah, Asad; Mahmood, Khalid; Matiullah

    2016-01-01

    The purpose of this study was to ensure accuracy in radiation dose delivery; external dosimetry quality audits are as important as the routine dosimetry performed at clinics. To this end, a dosimetry quality audit was organized by the Secondary Standard Dosimetry Laboratory (SSDL) of Pakistan Institute of Nuclear Science and Technology (PINSTECH) at the national level to investigate and minimize uncertainties involved in the measurement of absorbed dose, and to improve the accuracy of dose measurement at different radiotherapy hospitals. A total of 181 dosimetry quality audits (i.e., 102 of Co-60 and 79 of linear accelerators) for teletherapy units installed at 22 different sites were performed from 1989 to 2015. The percent deviations between users’ calculated/stated doses and the evaluated doses (the result of on-site dosimetry visits) were calculated and the results were analyzed with respect to the limits of ± 2.5% (ICRU "optimal" model), ± 3.0% (IAEA on-site dosimetry visit limit) and ± 5.0% (ICRU minimal or "lowest acceptable" model). The results showed that out of 181 total on-site dosimetry visits, 20.44%, 16.02%, and 4.42% were out of the acceptable limits of ± 2.5%, ± 3.0%, and ± 5.0%, respectively. A proper ongoing quality assurance program, adherence to the recommendations of the followed protocols, and properly calibrated thermometers, pressure gauges, and humidity meters at radiotherapy hospitals are essential in maintaining consistency and uniformity of absorbed dose measurements for precision in dose delivery.
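
    The audit criterion described above reduces to a percent-deviation check against an acceptance limit. A minimal sketch with hypothetical dose values; the ±3.0% default mirrors the IAEA on-site visit limit quoted in the abstract:

```python
def audit_result(stated_dose, measured_dose, tolerance=3.0):
    """Percent deviation between the clinic's stated dose and the
    audit-measured dose, checked against an acceptance limit (%).
    The +/-3.0% default mirrors the IAEA on-site visit limit; the
    dose values passed in below are purely hypothetical."""
    deviation = 100.0 * (stated_dose - measured_dose) / measured_dose
    return deviation, abs(deviation) <= tolerance

dev, ok = audit_result(2.04, 2.00)  # Gy, hypothetical
print(f"{dev:+.1f}% -> {'pass' if ok else 'fail'}")  # +2.0% -> pass
```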

  2. Narrow beam neutron dosimetry.

    PubMed

    Ferenci, M Sutton

    2004-01-01

    Organ and effective doses have been estimated for male and female anthropomorphic mathematical models exposed to monoenergetic narrow beams of neutrons with energies from 10(-11) to 1000 MeV. Calculations were performed for anterior-posterior, posterior-anterior, left-lateral and right-lateral irradiation geometries. The beam diameter used in the calculations was 7.62 cm and the phantoms were irradiated at a height of 1 m above the ground. This geometry was chosen to simulate an accidental scenario (a worker walking through the beam) at Flight Path 30 Left (FP30L) of the Weapons Neutron Research (WNR) Facility at Los Alamos National Laboratory. The calculations were carried out using the Monte Carlo transport code MCNPX 2.5c.

  3. Neutron H*(10) estimation and measurements around 18MV linac.

    PubMed

    Cerón Ramírez, Pablo Víctor; Díaz Góngora, José Antonio Irán; Paredes Gutiérrez, Lydia Concepción; Rivera Montalvo, Teodoro; Vega Carrillo, Héctor René

    2016-11-01

    Thermoluminescent dosimetry, analytical techniques and Monte Carlo calculations were used to estimate the neutron radiation dose in a treatment room housing an 18 MV linear electron accelerator. Measurements were carried out with neutron ambient dose monitors comprising pairs of thermoluminescent dosimeters TLD-600 (6LiF:Mg,Ti) and TLD-700 (7LiF:Mg,Ti) placed inside paraffin spheres. The measurements allowed the use of the NCRP Report 151 equations, which are useful for finding the relevant dosimetric quantities. In addition, the photoneutrons produced by the linac head were calculated with the MCNPX code, taking into account the geometry and composition of the principal parts of the linac head. Copyright © 2016 Elsevier Ltd. All rights reserved.
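
    The TLD-600/TLD-700 pairing used above works because 6LiF responds strongly to thermal neutrons via the 6Li capture reaction while 7LiF is essentially photon-only, so subtracting paired readings isolates the neutron component. A minimal sketch with hypothetical readings and a neutron calibration factor assumed to be unity (neither value comes from the paper):

```python
def neutron_signal(tld600_reading, tld700_reading, k_n=1.0):
    """Paired-TLD neutron estimate: TLD-600 (6LiF) responds to photons
    plus thermal neutrons, TLD-700 (7LiF) essentially to photons only,
    so the reading difference isolates the neutron component. k_n is a
    hypothetical calibration factor (signal -> dose), assumed unity."""
    return k_n * (tld600_reading - tld700_reading)

# Hypothetical readings in arbitrary signal units:
print(neutron_signal(12.5, 10.0))  # 2.5
```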

  4. Digital Mammography Breast Dosimetry Using Copper-Doped Lithium Fluoride (LiF:MCP) Thermoluminescent Dosimeters (TLDs)

    DTIC Science & Technology

    2003-06-18

    Mammography Breast Dosimetry Using Copper-Doped Lithium Fluoride (LiF:MCP) Thermoluminescent Dosimeters (TLDs)..."Digital Mammography Breast Dosimetry Using Copper-Doped Lithium Fluoride (LiF:MCP) Thermoluminescent Dosimeters (TLDs)" Author: LT John J. Tomon...Title of Thesis: "Digital Mammography Breast Dosimetry Using Copper-Doped Lithium Fluoride (LiF:MCP) Thermoluminescent

  5. Development of a Multileaf Collimator for Proton Radiotherapy

    DTIC Science & Technology

    2010-06-01

    generated and compared to the dosimetry derived from radiochromic media. TLDs may be inserted into the phantom to further confirm the technique. Finally...of dosimetry systems for scanned beams: (FY 2006-2009). We are investigating dosimetry systems for use with scanned beams and will either purchase a...group Research in Monte Carlo Simulations and Dosimetry Studies of Proton Therapy, Rulon Mayer, PhD. Energetic protons used to damage tumors

  6. The influence of neutron contamination on dosimetry in external photon beam radiotherapy.

    PubMed

    Horst, Felix; Czarnecki, Damian; Zink, Klemens

    2015-11-01

    Photon fields with energies above ∼7 MeV are contaminated by neutrons due to photonuclear reactions. Their influence on dosimetry, although considered to be very low, is widely unexplored. In this work, Monte Carlo based investigations into this issue performed with FLUKA and EGSnrc are presented. A typical Linac head in 18 MV-X mode was modeled equivalently within both codes. EGSnrc was used for the photon simulation and FLUKA for the neutron production and transport simulation. Water depth dose profiles and the response of different detectors (Farmer chamber, TLD-100, TLD-600H, and TLD-700H chip) in five representative depths were simulated and the neutrons' impact (neutron absorbed dose relative to photon absorbed dose) was calculated. To take account of the neutrons' influence, a theoretically required correction factor was defined and calculated for five representative water depths. The neutrons' impact on the absorbed dose to water was found to be below 0.1% for all depths and their impact on the response of the Farmer chamber and the TLD-700H chip was found to be even less. For the TLD-100 and the TLD-600H chip it was found to be up to 0.3% and 0.7%, respectively. The theoretical correction factors to be applied to absorbed dose to water values measured with these four detectors in a depth different from the reference/calibration depth were calculated and found to be below 0.05% for the Farmer chamber and the TLD-700H chip, but up to 0.15% and 0.35% for the TLD-100 and TLD-600H chips, respectively. In thermoluminescence dosimetry the neutrons' influence (and therefore the additional inaccuracy in measurement) was found to be higher for TLD materials whose 6Li fraction is high, such as TLD-100 and TLD-600H, resulting from the thermal neutron capture reaction on 6Li. 
The impact of photoneutrons on the absorbed dose to water and on the response of a typical ionization chamber as well as three different types of TLD chips was quantified and, as expected, was found to be very low relative to that of the primary photons. For most practical purposes the neutrons' influence on dosimetry may be neglected, while for absolute, precise thermoluminescence dosimetry in high energy photon fields, the use of TLD-700H (<0.03% 6Li) is recommended instead of the commonly used TLD-100 (7.4% 6Li) or the even more neutron-sensitive TLD-600H (95.6% 6Li), due to the additional inaccuracy in measurement for TLD materials with a high 6Li fraction.

  7. Large-Eddy Simulation Sensitivities to Variations of Configuration and Forcing Parameters in Canonical Boundary-Layer Flows for Wind Energy Applications

    DOE PAGES

    Mirocha, Jeffrey D.; Churchfield, Matthew J.; Munoz-Esparza, Domingo; ...

    2017-08-28

    Here, the sensitivities of idealized Large-Eddy Simulations (LES) to variations of model configuration and forcing parameters on quantities of interest to wind power applications are examined. Simulated wind speed, turbulent fluxes, spectra and cospectra are assessed in relation to variations of two physical factors, geostrophic wind speed and surface roughness length, and several model configuration choices, including mesh size and grid aspect ratio, turbulence model, and numerical discretization schemes, in three different code bases. Two case studies representing nearly steady neutral and convective atmospheric boundary layer (ABL) flow conditions over nearly flat and homogeneous terrain were used to force and assess idealized LES, using periodic lateral boundary conditions. Comparison with fast-response velocity measurements at five heights within the lowest 50 m indicates that most model configurations performed similarly overall, with differences between observed and predicted wind speed generally smaller than measurement variability. Simulations of convective conditions produced turbulence quantities and spectra that matched the observations well, while those of neutral simulations produced good predictions of stress, but smaller than observed magnitudes of turbulence kinetic energy, likely due to tower wakes influencing the measurements. While sensitivities to model configuration choices and variability in forcing can be considerable, idealized LES are shown to reliably reproduce quantities of interest to wind energy applications within the lower ABL during quasi-ideal, nearly steady neutral and convective conditions over nearly flat and homogeneous terrain.

  8. High-resolution LES of the rotating stall in a reduced scale model pump-turbine

    NASA Astrophysics Data System (ADS)

    Pacot, Olivier; Kato, Chisachi; Avellan, François

    2014-03-01

    Extending the operating range of modern pump-turbines becomes increasingly important in the course of the integration of renewable energy sources in the existing power grid. However, at partial load condition in pumping mode, the occurrence of rotating stall is critical to the operational safety of the machine and to grid stability. The understanding of the mechanisms behind this flow phenomenon yet remains vague and incomplete. Past numerical simulations using a RANS approach often led to inconclusive results concerning the physical background. For the first time, the rotating stall is investigated by performing a large-scale LES calculation of the HYDRODYNA pump-turbine scale model, featuring approximately 100 million elements. The computations were performed on the PRIMEHPC FX10 of the University of Tokyo using the overset Finite Element open source code FrontFlow/blue with the dynamic Smagorinsky turbulence model and the no-slip wall condition. The internal flow computed corresponds to operating the pump-turbine at 76% of the best efficiency point in pumping mode, where previous experimental research showed the presence of four rotating cells. The rotating stall phenomenon is accurately reproduced for a reduced Reynolds number using the LES approach with acceptable computing resources. The results show an excellent agreement with available experimental data from the reduced scale model testing at the EPFL Laboratory for Hydraulic Machines. The number of stall cells as well as the propagation speed corroborate the experiment.

  9. LES on unstructured deforming meshes: Towards reciprocating IC engines

    NASA Technical Reports Server (NTRS)

    Haworth, D. C.; Jansen, K.

    1996-01-01

    A variable explicit/implicit characteristics-based advection scheme that is second-order accurate in space and time has been developed recently for unstructured deforming meshes (O'Rourke & Sahota 1996a). To explore the suitability of this methodology for Large-Eddy Simulation (LES), three subgrid-scale turbulence models have been implemented in the CHAD CFD code (O'Rourke & Sahota 1996b): a constant-coefficient Smagorinsky model, a dynamic Smagorinsky model for flows having one or more directions of statistical homogeneity, and a Lagrangian dynamic Smagorinsky model for flows having no spatial or temporal homogeneity (Meneveau et al. 1996). Computations have been made for three canonical flows, progressing towards the intended application of in-cylinder flow in a reciprocating engine. Grid sizes were selected to be comparable to the coarsest meshes used in earlier spectral LES studies. Quantitative results are reported for decaying homogeneous isotropic turbulence, and for a planar channel flow. Computations are compared to experimental measurements, to Direct-Numerical Simulation (DNS) data, and to Rapid-Distortion Theory (RDT) where appropriate. Generally satisfactory evolution of first and second moments is found on these coarse meshes; deviations are attributed to insufficient mesh resolution. Issues include mesh resolution and computational requirements for a specified level of accuracy, analytic characterization of the filtering implied by the numerical method, wall treatment, and inflow boundary conditions. To resolve these issues, finer-mesh simulations and computations of a simplified axisymmetric reciprocating piston-cylinder assembly are in progress.
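    For context on the subgrid-scale closures named in this record: the constant-coefficient Smagorinsky model (a textbook sketch, not the paper's specific formulation) computes an eddy viscosity from the resolved strain rate,

```latex
\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
```

    and models the deviatoric subgrid stress as \tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\nu_t \bar{S}_{ij}, where \Delta is the filter width. The dynamic variants mentioned above determine C_s from the resolved field via the Germano identity rather than fixing it a priori.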

  10. Large-Eddy Simulation Sensitivities to Variations of Configuration and Forcing Parameters in Canonical Boundary-Layer Flows for Wind Energy Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirocha, Jeffrey D.; Churchfield, Matthew J.; Munoz-Esparza, Domingo

    Here, the sensitivities of idealized Large-Eddy Simulations (LES) to variations of model configuration and forcing parameters on quantities of interest to wind power applications are examined. Simulated wind speed, turbulent fluxes, spectra and cospectra are assessed in relation to variations of two physical factors, geostrophic wind speed and surface roughness length, and several model configuration choices, including mesh size and grid aspect ratio, turbulence model, and numerical discretization schemes, in three different code bases. Two case studies representing nearly steady neutral and convective atmospheric boundary layer (ABL) flow conditions over nearly flat and homogeneous terrain were used to force and assess idealized LES, using periodic lateral boundary conditions. Comparison with fast-response velocity measurements at five heights within the lowest 50 m indicates that most model configurations performed similarly overall, with differences between observed and predicted wind speed generally smaller than measurement variability. Simulations of convective conditions produced turbulence quantities and spectra that matched the observations well, while those of neutral simulations produced good predictions of stress, but smaller than observed magnitudes of turbulence kinetic energy, likely due to tower wakes influencing the measurements. While sensitivities to model configuration choices and variability in forcing can be considerable, idealized LES are shown to reliably reproduce quantities of interest to wind energy applications within the lower ABL during quasi-ideal, nearly steady neutral and convective conditions over nearly flat and homogeneous terrain.

  11. Recent Progress in Electromagnetic Absorption and Dosimetry in Biological Systems.

    DTIC Science & Technology

    1978-12-21

    AEROSPACE MEDICAL RESEARCH LABORATORY, NAVAL AIR STATION, PENSACOLA, FLORIDA 32508. SUMMARY PAGE. PROBLEM: Dosimetry, as a subset of research in...absence of sound dosimetry design, lacks credibility. This study provides a usable orientation in present and future dosimetric technology through a...leading experiment; while at other times experimental results lead the way. Progress in absorption and dosimetry is still underway, and higher degrees

  12. Nonuniform Irradiation of the Canine Intestine. 2. Dosimetry

    DTIC Science & Technology

    1990-01-01

    ...irradiation is accurate assessment of the injury after either accidental or... In vivo dosimetry was done using Harshaw (Solon, Ohio) TLD-100 lith[ium]... The in vivo TLD dosimetry system allowed measurement of the 60Co dose deposited in the canine small... The dose was determined from the median TLD... 5 and 6... provide replicate measurements. Two separate dosimetry tubes were developed (Fig. 1). The first contained 30 TLD cap... doses (1). Nevertheless, current

  13. Thermoluminescence Dosimetry (TLD) and its Application in Medical Physics

    NASA Astrophysics Data System (ADS)

    Azorín Nieto, Juan

    2004-09-01

    Radiation dosimetry is fundamental in Medical Physics, involving both patient and phantom dosimetry. In both cases thermoluminescence dosimetry (TLD) is the most appropriate technique for measuring the absorbed dose. In this paper the thermoluminescence phenomenon, as well as the use of TLD in radiodiagnosis and radiotherapy for in vivo or in-phantom measurements, is discussed. Some results of measurements made in radiotherapy and radiodiagnosis using home-made LiF:Mg,Cu,P+PTFE TLDs are presented.

  14. [Verification of the dose delivered to the patient by means of TLD, SC, PID. What future?].

    PubMed

    Noël, A

    2003-11-01

    Among the different possibilities to check the accuracy of the treatment delivered, only in vivo dosimetry ensures the precision of the dose delivered to the patient during the treatment. In 1970-1980, Ruden assessed the use of thermoluminescent dosimetry to perform in vivo measurements at Radiumhemmet in Stockholm. Straightforward in its principle but demanding in its implementation, thermoluminescent dosimetry has been widely used. Today, thanks to the work of Rikner, the use of semiconductor detectors allows the general implementation of in vivo dosimetry. Tomorrow, we will use electronic portal imaging devices to verify the geometrical patient setup and the dose delivery at the same time. Their implementation remains complex and will need the development of algorithms to compute exit dose or midplane dose using portal in vivo dosimetry. First clinical results show that portal imaging is an accurate alternative to conventional in vivo dosimetry using diodes.

  15. EURADOS strategic research agenda: vision for dosimetry of ionising radiation

    PubMed Central

    Rühm, W.; Fantuzzi, E.; Harrison, R.; Schuhmacher, H.; Vanhavere, F.; Alves, J.; Bottollier Depois, J. F.; Fattibene, P.; Knežević, Ž.; Lopez, M. A.; Mayer, S.; Miljanić, S.; Neumaier, S.; Olko, P.; Stadtmann, H.; Tanner, R.; Woda, C.

    2016-01-01

    Since autumn 2012, the European Radiation Dosimetry Group (EURADOS) has been developing its Strategic Research Agenda (SRA), which is intended to contribute to the identification of future research needs in radiation dosimetry in Europe. The present article summarises—based on input from EURADOS Working Groups (WGs) and Voting Members—five visions in dosimetry and defines key issues in dosimetry research that are considered important for the next decades. The five visions include scientific developments required towards (a) updated fundamental dose concepts and quantities, (b) improved radiation risk estimates deduced from epidemiological cohorts, (c) efficient dose assessment for radiological emergencies, (d) integrated personalised dosimetry in medical applications and (e) improved radiation protection of workers and the public. The SRA of EURADOS will be used as a guideline for future activities of the EURADOS WGs. A detailed version of the SRA can be downloaded as a EURADOS report from the EURADOS website (www.eurados.org). PMID:25752758

  16. Technical basis for external dosimetry at the Waste Isolation Pilot Plant (WIPP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, E.W.; Wu, C.F.; Goff, T.E.

    1993-12-31

    The WIPP External Dosimetry Program, administered by Westinghouse Electric Corporation, Waste Isolation Division, for the US Department of Energy (DOE), provides external dosimetry support services for operations at the Waste Isolation Pilot Plant (WIPP) Site. These operations include the receipt, experimentation with, storage, and disposal of transuranic (TRU) wastes. This document describes the technical basis for the WIPP External Radiation Dosimetry Program. The purposes of this document are to: (1) provide assurance that the WIPP External Radiation Dosimetry Program is in compliance with all regulatory requirements, (2) provide assurance that the WIPP External Radiation Dosimetry Program is derived from a sound technical base, (3) serve as a technical reference for radiation protection personnel, and (4) aid in identifying and planning for future needs. The external radiation exposure fields are those that are documented in the WIPP Final Safety Analysis Report.

  17. Effects of body habitus on internal radiation dose calculations using the 5-year-old anthropomorphic male models.

    PubMed

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-07-13

    Computational phantoms are commonly used in internal radiation dosimetry to assess the amount and distribution pattern of energy deposited in various parts of the human body from different internal radiation sources. Radiation dose assessments are commonly performed on predetermined reference computational phantoms while the argument for individualized patient-specific radiation dosimetry exists. This study aims to evaluate the influence of body habitus on internal dosimetry and to quantify the uncertainties in dose estimation correlated with the use of fixed reference models. The 5-year-old IT'IS male phantom was modified to match target anthropometric parameters, including body weight, body height and sitting height/stature ratio (SSR), determined from reference databases, thus enabling the creation of 125 5-year-old habitus-dependent male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. We evaluated the absorbed fractions and the mean absorbed dose to the target region per unit cumulative activity in the source region (S-values) of F-18 in 46 source regions for the generated 125 anthropomorphic 5-year-old hybrid male phantoms using the Monte Carlo N-Particle eXtended general purpose Monte Carlo transport code and calculated the absorbed dose and effective dose of five 18F-labelled radiotracers for children of various habitus. For most organs, the S-value of F-18 presents stronger statistical correlations with body weight, standing height and sitting height than BMI and SSR. The self-absorbed fraction and self-absorbed S-values of F-18 and the absorbed dose and effective dose of 18F-labelled radiotracers present with the strongest statistical correlations with body weight. For 18F-Amino acids, 18F-Brain receptor substances, 18F-FDG, 18F-L-DOPA and 18F-FBPA, the mean absolute effective dose differences between phantoms of different habitus and fixed reference models are 11.4%, 11.3%, 10.8%, 13.3% and 11.4%, respectively.
Total body weight, standing height and sitting height have considerable effects on human internal dosimetry. Radiation dose calculations for individual subjects using the most closely matched habitus-dependent computational phantom should be considered as an alternative to improve the accuracy of the estimates.
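    For context, the S-values evaluated in this record follow the standard MIRD formalism (a general sketch of the published schema, not this study's specific implementation), in which the absorbed dose to a target region r_T is

```latex
D(r_T) = \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S), \qquad
S(r_T \leftarrow r_S) = \frac{1}{M(r_T)} \sum_i E_i\, Y_i\, \phi(r_T \leftarrow r_S; E_i),
```

    where \tilde{A}(r_S) is the time-integrated (cumulated) activity in source region r_S, M(r_T) the target-region mass, E_i and Y_i the energy and yield of the i-th emitted radiation, and \phi the absorbed fraction. This makes explicit why the self-absorbed S-values correlate so strongly with body weight: the target mass appears directly in the denominator.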

  18. Assessment of PCXMC for patients with different body size in chest and abdominal x ray examinations: a Monte Carlo simulation study.

    PubMed

    Borrego, David; Lowe, Erin M; Kitahara, Cari M; Lee, Choonsik

    2018-03-21

    A PC Program for x ray Monte Carlo (PCXMC) has been used to calculate organ doses in patient dosimetry and for the exposure assessment in epidemiological studies of radiogenic health-related risks. This study compared the dosimetry from using the built-in stylized phantoms in the PCXMC to that of a newer hybrid phantom library with improved anatomical realism. We simulated chest and abdominal x ray projections for 146 unique body size computational phantoms, 77 males and 69 females, with different combinations of height (125-180 cm) and weight (20-140 kg) using the built-in stylized phantoms in the PCXMC version 2.0.1.4 and the hybrid phantom library using the Monte Carlo N-particle eXtended transport code 2.7 (MCNPX). Unfortunately, it was not possible to incorporate the hybrid phantom library into the PCXMC. We compared 14 organ doses, including dose to the active bone marrow, to evaluate differences between the built-in stylized phantoms in the PCXMC and the hybrid phantoms (Cristy and Eckerman 1987 Technical Report ORNL/TM-8381/V1, Oak Ridge National Laboratory, Eckerman and Ryman 1993 Technical Report 12 Oak Ridge, TN, Geyer et al 2014 Phys. Med. Biol. 59 5225-42). On average, organ doses calculated using the built-in stylized phantoms in the PCXMC were greater when compared to the hybrid phantoms. This is most prominent in AP abdominal exams by an average factor of 2.4-, 2.8-, and 2.8-fold for the 10-year-old, 15-year-old, and adult phantoms, respectively. For chest exams, organ doses are greater by an average factor of 1.1-, 1.4-, and 1.2-fold for the 10-year-old, 15-year-old, and adult phantoms, respectively. The PCXMC, due to its ease of use, is often selected to support dosimetry in epidemiological studies; however, it uses simplified models of the human anatomy that fail to account for variations in body morphometry for increasing weight. 
For epidemiological studies that use PCXMC dosimetry, associations between radiation-related disease risks and organ doses may be underestimated, and to a greater degree in pediatric, especially obese pediatric, compared to adult patients.
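    The "average fold factor" quoted in this record is simply the mean of per-organ ratios between stylized-phantom and hybrid-phantom organ doses. A minimal sketch, using hypothetical dose values (not data from the study):

```python
# Illustrative only: the dose values below are hypothetical placeholders,
# not numbers reported by Borrego et al.

def mean_fold_factor(stylized_doses, hybrid_doses):
    """Mean per-organ ratio of stylized-phantom to hybrid-phantom dose."""
    ratios = [s / h for s, h in zip(stylized_doses, hybrid_doses)]
    return sum(ratios) / len(ratios)

# Hypothetical per-organ doses (mGy) for one exam/phantom combination:
stylized = [0.30, 0.24, 0.18, 0.12]
hybrid = [0.10, 0.12, 0.09, 0.04]
print(round(mean_fold_factor(stylized, hybrid), 2))
```

    A fold factor above 1 indicates the stylized phantoms overestimate organ dose relative to the more anatomically realistic hybrid phantoms, as the study reports.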

  19. On the p(dis) correction factor for cylindrical chambers.

    PubMed

    Andreo, Pedro

    2010-03-07

    The authors of a recent paper (Wang and Rogers 2009 Phys. Med. Biol. 54 1609) have used the Monte Carlo method to simulate the 'classical' experiment made more than 30 years ago by Johansson et al (1978 National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) on the displacement (or replacement) perturbation correction factor p(dis) for cylindrical chambers in 60Co and high-energy photon beams. They conclude that an 'unreasonable normalization at dmax' of the ionization chambers response led to incorrect results, and for the IAEA TRS-398 Code of Practice, which uses ratios of those results, 'the difference in the correction factors can lead to a beam calibration deviation of more than 0.5% for Farmer-like chambers'. The present work critically examines and questions some of the claims and generalized conclusions of the paper. It is demonstrated that for real, commercial Farmer-like chambers, the possible deviations in absorbed dose would be much smaller (typically 0.13%) than those stated by Wang and Rogers, making the impact of their proposed values negligible on practical high-energy photon dosimetry. Differences of the order of 0.4% would only appear at the upper extreme of the energies potentially available for clinical use (around 25 MV) and, because lower energies are more frequently used, the number of radiotherapy photon beams for which the deviations would be larger than say 0.2% is extremely small. This work also raises concerns on the proposed value of p(dis) for Farmer chambers at the reference quality of 60Co in relation to their impact on electron beam dosimetry, both for direct dose determination using these chambers and for the cross-calibration of plane-parallel chambers. The proposed increase of about 1% in p(dis) (compared with TRS-398) would lower the kQ factors and therefore Dw in electron beams by the same amount. 
This would yield a severe discrepancy with the current good agreement between electron dosimetry based on an electron cross-calibrated plane-parallel chamber (against a Farmer) or on a directly 60Co calibrated plane-parallel chamber, which is not likely to be in error by 1%. It is suggested that the influence of the 60Co source spectrum used in the simulations may not be negligible for calculations aimed at an uncertainty level of 0.1%.
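    For context, in the TRS-398 absorbed-dose-to-water formalism discussed in this record (a standard sketch of the published formalism, not a result of the letter), the dose at user beam quality Q is obtained from the chamber reading as

```latex
D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0},
```

    where M_Q is the fully corrected chamber reading, N_{D,w,Q_0} the absorbed-dose-to-water calibration coefficient at the reference quality Q_0 (typically 60Co), and k_{Q,Q_0} the beam quality correction factor. Because p(dis) at 60Co enters the denominator of the calculated k_Q, the proposed ~1% increase in p(dis) lowers k_Q, and hence D_w in electron beams, by the same amount, which is the propagation the letter objects to.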

  20. Assessment of PCXMC for patients with different body size in chest and abdominal x ray examinations: a Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Borrego, David; Lowe, Erin M.; Kitahara, Cari M.; Lee, Choonsik

    2018-03-01

    A PC Program for x ray Monte Carlo (PCXMC) has been used to calculate organ doses in patient dosimetry and for the exposure assessment in epidemiological studies of radiogenic health-related risks. This study compared the dosimetry from using the built-in stylized phantoms in the PCXMC to that of a newer hybrid phantom library with improved anatomical realism. We simulated chest and abdominal x ray projections for 146 unique body size computational phantoms, 77 males and 69 females, with different combinations of height (125–180 cm) and weight (20–140 kg) using the built-in stylized phantoms in the PCXMC version 2.0.1.4 and the hybrid phantom library using the Monte Carlo N-particle eXtended transport code 2.7 (MCNPX). Unfortunately, it was not possible to incorporate the hybrid phantom library into the PCXMC. We compared 14 organ doses, including dose to the active bone marrow, to evaluate differences between the built-in stylized phantoms in the PCXMC and the hybrid phantoms (Cristy and Eckerman 1987 Technical Report ORNL/TM-8381/V1, Oak Ridge National Laboratory, Eckerman and Ryman 1993 Technical Report 12 Oak Ridge, TN, Geyer et al 2014 Phys. Med. Biol. 59 5225–42). On average, organ doses calculated using the built-in stylized phantoms in the PCXMC were greater when compared to the hybrid phantoms. This is most prominent in AP abdominal exams by an average factor of 2.4-, 2.8-, and 2.8-fold for the 10-year-old, 15-year-old, and adult phantoms, respectively. For chest exams, organ doses are greater by an average factor of 1.1-, 1.4-, and 1.2-fold for the 10-year-old, 15-year-old, and adult phantoms, respectively. The PCXMC, due to its ease of use, is often selected to support dosimetry in epidemiological studies; however, it uses simplified models of the human anatomy that fail to account for variations in body morphometry for increasing weight. 
For epidemiological studies that use PCXMC dosimetry, associations between radiation-related disease risks and organ doses may be underestimated, and to a greater degree in pediatric, especially obese pediatric, compared to adult patients.

  1. Effects of body habitus on internal radiation dose calculations using the 5-year-old anthropomorphic male models

    NASA Astrophysics Data System (ADS)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-08-01

    Computational phantoms are commonly used in internal radiation dosimetry to assess the amount and distribution pattern of energy deposited in various parts of the human body from different internal radiation sources. Radiation dose assessments are commonly performed on predetermined reference computational phantoms while the argument for individualized patient-specific radiation dosimetry exists. This study aims to evaluate the influence of body habitus on internal dosimetry and to quantify the uncertainties in dose estimation correlated with the use of fixed reference models. The 5-year-old IT’IS male phantom was modified to match target anthropometric parameters, including body weight, body height and sitting height/stature ratio (SSR), determined from reference databases, thus enabling the creation of 125 5-year-old habitus-dependent male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. We evaluated the absorbed fractions and the mean absorbed dose to the target region per unit cumulative activity in the source region (S-values) of F-18 in 46 source regions for the generated 125 anthropomorphic 5-year-old hybrid male phantoms using the Monte Carlo N-Particle eXtended general purpose Monte Carlo transport code and calculated the absorbed dose and effective dose of five 18F-labelled radiotracers for children of various habitus. For most organs, the S-value of F-18 presents stronger statistical correlations with body weight, standing height and sitting height than BMI and SSR. The self-absorbed fraction and self-absorbed S-values of F-18 and the absorbed dose and effective dose of 18F-labelled radiotracers present with the strongest statistical correlations with body weight. For 18F-Amino acids, 18F-Brain receptor substances, 18F-FDG, 18F-L-DOPA and 18F-FBPA, the mean absolute effective dose differences between phantoms of different habitus and fixed reference models are 11.4%, 11.3%, 10.8%, 13.3% and 11.4%, respectively. 
Total body weight, standing height and sitting height have considerable effects on human internal dosimetry. Radiation dose calculations for individual subjects using the most closely matched habitus-dependent computational phantom should be considered as an alternative to improve the accuracy of the estimates.

  2. LETTER TO THE EDITOR: On the pdis correction factor for cylindrical chambers

    NASA Astrophysics Data System (ADS)

    Andreo, Pedro

    2010-03-01

    The authors of a recent paper (Wang and Rogers 2009 Phys. Med. Biol. 54 1609) have used the Monte Carlo method to simulate the 'classical' experiment made more than 30 years ago by Johansson et al (1978 National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) on the displacement (or replacement) perturbation correction factor pdis for cylindrical chambers in 60Co and high-energy photon beams. They conclude that an 'unreasonable normalization at dmax' of the ionization chambers response led to incorrect results, and for the IAEA TRS-398 Code of Practice, which uses ratios of those results, 'the difference in the correction factors can lead to a beam calibration deviation of more than 0.5% for Farmer-like chambers'. The present work critically examines and questions some of the claims and generalized conclusions of the paper. It is demonstrated that for real, commercial Farmer-like chambers, the possible deviations in absorbed dose would be much smaller (typically 0.13%) than those stated by Wang and Rogers, making the impact of their proposed values negligible on practical high-energy photon dosimetry. Differences of the order of 0.4% would only appear at the upper extreme of the energies potentially available for clinical use (around 25 MV) and, because lower energies are more frequently used, the number of radiotherapy photon beams for which the deviations would be larger than say 0.2% is extremely small. This work also raises concerns on the proposed value of pdis for Farmer chambers at the reference quality of 60Co in relation to their impact on electron beam dosimetry, both for direct dose determination using these chambers and for the cross-calibration of plane-parallel chambers. The proposed increase of about 1% in pdis (compared with TRS-398) would lower the kQ factors and therefore Dw in electron beams by the same amount. 
This would yield a severe discrepancy with the current good agreement between electron dosimetry based on an electron cross-calibrated plane-parallel chamber (against a Farmer) or on a directly 60Co calibrated plane-parallel chamber, which is not likely to be in error by 1%. It is suggested that the influence of the 60Co source spectrum used in the simulations may not be negligible for calculations aimed at an uncertainty level of 0.1%.

  3. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE PAGES

    Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...

    2015-12-15

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.

  4. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.

  5. Clinical implementation and rapid commissioning of an EPID based in-vivo dosimetry system.

    PubMed

    Hanson, Ian M; Hansen, Vibeke N; Olaciregui-Ruiz, Igor; van Herk, Marcel

    2014-10-07

    Using an Electronic Portal Imaging Device (EPID) to perform in-vivo dosimetry is one of the most effective and efficient methods of verifying the safe delivery of complex radiotherapy treatments. Previous work has detailed the development of an EPID based in-vivo dosimetry system that was subsequently used to replace pre-treatment dose verification of IMRT and VMAT plans. Here we show that this system can be readily implemented on a commercial megavoltage imaging platform without modification to EPID hardware and without impacting standard imaging procedures. The accuracy and practicality of the EPID in-vivo dosimetry system was confirmed through a comparison with traditional TLD in-vivo measurements performed on five prostate patients. The commissioning time required for the EPID in-vivo dosimetry system was initially prohibitive at approximately 10 h per linac. Here we present a method of calculating linac specific EPID dosimetry correction factors that allow a single energy specific commissioning model to be applied to EPID data from multiple linacs. Using this method reduced the required per linac commissioning time to approximately 30 min. The validity of this commissioning method has been tested by analysing in-vivo dosimetry results of 1220 patients acquired on seven linacs over a period of 5 years. The average deviation between EPID based isocentre dose and expected isocentre dose for these patients was (-0.7 ± 3.2)%. EPID based in-vivo dosimetry is now the primary in-vivo dosimetry tool used at our centre and has replaced nearly all pre-treatment dose verification of IMRT treatments.
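    The summary statistic quoted in this record, a mean and standard deviation of percentage deviations between EPID-reconstructed and expected isocentre dose, can be sketched as follows. The dose values below are hypothetical placeholders, not the study's patient data:

```python
import statistics


def deviation_stats(measured, expected):
    """Percentage deviations of measured vs expected isocentre dose,
    summarised as (mean, sample standard deviation)."""
    devs = [100.0 * (m - e) / e for m, e in zip(measured, expected)]
    return statistics.mean(devs), statistics.stdev(devs)


# Hypothetical isocentre doses in Gy (illustrative only):
measured = [1.98, 2.05, 1.94, 2.02]
expected = [2.00, 2.00, 2.00, 2.00]
mean_dev, sd_dev = deviation_stats(measured, expected)
print(f"({mean_dev:.1f} \u00b1 {sd_dev:.1f})%")
```

    The study's figure of (-0.7 ± 3.2)% is this statistic computed over 1220 patients.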

  6. Clinical implementation and rapid commissioning of an EPID based in-vivo dosimetry system

    NASA Astrophysics Data System (ADS)

    Hanson, Ian M.; Hansen, Vibeke N.; Olaciregui-Ruiz, Igor; van Herk, Marcel

    2014-10-01

    Using an Electronic Portal Imaging Device (EPID) to perform in-vivo dosimetry is one of the most effective and efficient methods of verifying the safe delivery of complex radiotherapy treatments. Previous work has detailed the development of an EPID based in-vivo dosimetry system that was subsequently used to replace pre-treatment dose verification of IMRT and VMAT plans. Here we show that this system can be readily implemented on a commercial megavoltage imaging platform without modification to EPID hardware and without impacting standard imaging procedures. The accuracy and practicality of the EPID in-vivo dosimetry system was confirmed through a comparison with traditional TLD in-vivo measurements performed on five prostate patients. The commissioning time required for the EPID in-vivo dosimetry system was initially prohibitive at approximately 10 h per linac. Here we present a method of calculating linac specific EPID dosimetry correction factors that allow a single energy specific commissioning model to be applied to EPID data from multiple linacs. Using this method reduced the required per linac commissioning time to approximately 30 min. The validity of this commissioning method has been tested by analysing in-vivo dosimetry results of 1220 patients acquired on seven linacs over a period of 5 years. The average deviation between EPID based isocentre dose and expected isocentre dose for these patients was (-0.7  ±  3.2)%. EPID based in-vivo dosimetry is now the primary in-vivo dosimetry tool used at our centre and has replaced nearly all pre-treatment dose verification of IMRT treatments.

  7. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5 and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary 60Co calibration beam, (c) 6-MV and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode closely resembled the other three codes and the differences were within 5%. Compared with the measured currents, MCNP5 and MCNPX using ITS-mode showed excellent agreement for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications to mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.
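    The code-versus-measurement comparison described above reduces to relative deviations of simulated chamber currents from the measured current in each calibration field. A minimal sketch, with hypothetical currents (the paper's actual values are not reproduced here):

```python
# Illustrative only: measured and simulated currents below are placeholders.

def percent_deviation(simulated, measured):
    """Relative deviation of a simulated chamber current from measurement, in %."""
    return 100.0 * (simulated - measured) / measured


# Hypothetical measured current and per-code simulated currents (arbitrary units):
measured_current = 1.00
simulated_currents = {"EGSnrc": 1.02, "FLUKA": 0.99, "MCNP5 (ITS-mode)": 1.17}
for code, value in simulated_currents.items():
    print(f"{code}: {percent_deviation(value, measured_current):+.1f}%")
```

    A 17% deviation, as reported for the X-ray region, corresponds to a simulated current 1.17 times the measured one in this convention.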

  8. Transport de Particules et Atmospheres D'etoiles Magnetiques

    NASA Astrophysics Data System (ADS)

    LeBlanc, Francis

    1995-01-01

    Phenomena related to atomic diffusion in stars have been studied intensively for about a quarter of a century. Diffusion can both modify the atomic abundances present and affect stellar structure and evolution. In this thesis, we study three physical phenomena related to diffusion. We developed the theory of radiation-induced drift so that it can easily be applied in the context of stellar astrophysics. Detailed calculations were performed to evaluate the importance of this effect on the relative diffusion of 3He and 4He, and they show that radiation-induced drift accelerates the separation of these two isotopes in a star with an effective temperature of 18,000 K. When 4He is present, this phenomenon increases the drift velocity of 3He, which migrates outward, so that the overabundance of this isotope appears earlier in the star's evolution. Calculations for lithium at the base of the convection zone of a star with an effective temperature of 6700 K show that radiation-induced drift is not important in that case. This phenomenon also appears to be negligible for oxygen in A-type stars and for mercury in B-type stars. Secondly, we constructed model atmospheres of stars with a constant, horizontal magnetic field, including the interaction between this field and the ambipolar diffusion of hydrogen. This interaction compresses the hydrogen ionization zone. In a model with an effective temperature of 10,000 K and log g = 4.0, the effective gravity, that is, gravity plus the acceleration caused by the Lorentz force, in the presence of a 5 kG magnetic field is seven times larger than gravity. This phenomenon therefore strongly affects the structure of Ap stars. 
    This modification of the structure of magnetic stars causes a greater broadening of the hydrogen Balmer lines. Since the observed magnetic field is not uniform over the surface of Ap stars, the structural modification caused by the interaction between the ambipolar diffusion of hydrogen and the magnetic field produces a variation of the Balmer-line broadening over a rotation period. The variation caused by this phenomenon is smaller than the observed variations. Other factors, such as horizontal and vertical gradients of the metallicity and of the magnetic field configuration, can also influence the Balmer-line variations. Major improvements were made to the calculation of radiative accelerations. Thanks to more complete databases, it is now possible to calculate the acceleration caused by photoionization. In addition, we calculated, in an approximate manner, the total monochromatic opacity, which is an essential ingredient in the calculation of the radiative acceleration. Improvements concerning line broadening and the distribution of the acceleration among the various ions of an element were also included. Detailed calculations of the radiative acceleration on iron show that an abundance consistent with observations can be supported in A- and F-type stars. The supported iron abundance depends on the effective temperature and surface gravity of the star. The radiative accelerations have been tabulated so that they can easily be used in stellar evolution codes.

  9. Canadian Thesis Abstracts: Synthèse spectrale de jeunes populations stellaires dans l'ultraviolet lointain

    NASA Astrophysics Data System (ADS)

    Pellerin, Anne

    2005-02-01

    The goal of this thesis was to develop and test the technique of evolutionary spectral synthesis at far-ultraviolet wavelengths. Until recently, this technique had been applied only to data above 1200 Å. The launch of the FUSE satellite in 1999 made it possible to explore the far-ultraviolet domain (900-1200 Å) at high spectral resolution. I therefore used FUSE satellite spectra of 228 hot O- and B-type stars, 24 starburst galaxies, and four Seyfert galaxies. First, I characterized the behaviour of the stellar line profiles as a function of the spectral type, luminosity class, and metallicity of the stars. The O VI λλ1031.9, 1037.6, S IV λλ1062.7, 1073.0, 1073.5, P V λλ1118.0, 1128.0, and C III λ1175.6 lines were identified as potentially interesting stellar indicators for spectral synthesis. The wavelength domain below 1000 Å covered by FUSE also shows stellar signatures, but they are of little interest for synthesis because of interstellar contamination. I then created a library of FUSE spectra that was integrated into the LavalSB synthesis code in order to produce synthetic far-ultraviolet spectra for various theoretical stellar populations. It turned out that the P V and C III lines are excellent indicators of the age, metallicity, and initial mass function of the stellar population, whereas the O VI and S IV lines are not as effective. The comparison of the FUSE spectra of galaxies with the synthetic spectra revealed ages between 2.5 and 18 million years for a wide range of metallicities. A strong dominance of the instantaneous mode of star formation is also found. This work also provided quantitative estimates of the internal extinction and of the stellar masses involved in the bursts. 
    The synthesis of the far-ultraviolet lines proved much more precise than synthesis at λ > 1200 Å, thanks to the exceptional spectral resolution of FUSE and because the stellar lines do not have saturated profiles, even at high metallicities. The global physical properties of the 24 starburst galaxies were also studied as a whole in order to better describe the starburst phenomenon.

  10. SIERRA/Aero Theory Manual Version 4.46.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-09-01

    SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high-Reynolds-number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds-Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  11. SIERRA/Aero Theory Manual Version 4.44

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-04-01

    SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high-Reynolds-number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds-Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  12. Second-order closure PBL model with new third-order moments: Comparison with LES data

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Minotti, F.; Ronchi, C.; Ypma, R. M.; Zeman, O.

    1994-01-01

    This paper contains two parts. In the first part, a new set of diagnostic equations is derived for the third-order moments for a buoyancy-driven flow, by exact inversion of the prognostic third-order moment equations in the stationary case. The third-order moments exhibit a universal structure: they are all linear combinations of the derivatives of the second-order moments $\overline{w^2}$, $\overline{w\theta}$, $\overline{\theta^2}$, and $\overline{q^2}$. Each term of the sum contains a turbulent diffusivity $D_t$, which also exhibits a universal structure of the form $D_t = a\,\nu_t + b\,\overline{w\theta}$. Since the sign of the convective flux changes depending on stable or unstable stratification, $D_t$ varies according to the type of stratification. Here $\nu_t \approx wl$ ($l$ is a mixing length and $w$ is an rms velocity) represents the 'mechanical' part, while the 'buoyancy' part is represented by the convective flux $\overline{w\theta}$. The quantities $a$ and $b$ are functions of the variable $(N\tau)^2$, where $N^2 = g\alpha\,\partial\Theta/\partial z$ and $\tau$ is the turbulence time scale. The new expressions for the third-order moments generalize those of Zeman and Lumley, which were subsequently adopted by Sun and Ogura, Chen and Cotton, and Finger and Schmidt in their treatments of the convective boundary layer. In the second part, the new expressions for the third-order moments are used to solve the ensemble-average equations describing a purely convective boundary layer heated from below at a constant rate. The computed second- and third-order moments are then compared with the corresponding Large Eddy Simulation (LES) results, most of which were obtained by running a new LES code, with the remainder taken from published results. The ensemble-average results compare favorably with the LES data.
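The structure described in the abstract can be written out schematically in LaTeX. This is a hedged reconstruction assembled from the abstract's own symbols; the exact pairing of each third-order moment with each gradient is schematic, not the paper's full equation set.

```latex
% Each third-order moment is a linear combination of vertical derivatives of
% the second-order moments, every term carrying a turbulent diffusivity:
\overline{\phi_i\phi_j\phi_k}
   \;=\; \sum_{(a,b)} D_t^{(a,b)}\,\partial_z\,\overline{\phi_a\phi_b},
\qquad
\overline{\phi_a\phi_b} \in
   \bigl\{\overline{w^2},\ \overline{w\theta},\ \overline{\theta^2},\ \overline{q^2}\bigr\},
% with each diffusivity of the universal form
D_t \;=\; a\,\nu_t + b\,\overline{w\theta},
\qquad \nu_t \approx w\,l,
\qquad a, b = f\!\bigl((N\tau)^2\bigr),
\qquad N^2 = g\,\alpha\,\frac{\partial \Theta}{\partial z}.
```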

  13. Détention provisoire des jeunes femmes accusées d'avortement clandestin ou d'infanticide au Sénégal

    PubMed Central

    Soumah, Mohamed Maniboliot; Pemba, Liliane Flore

    2012-01-01

    Introduction: Sexual activity among young people exposes them to an increased risk of unwanted pregnancies. Recourse to clandestine abortion, with its attendant complications, can lead to the death of the young woman. Abortion and infanticide are prohibited and punished under Senegalese law. How do these young women experience their detention? Are there alternatives to detention that would avoid their desocialization? Methods: This retrospective study concerned the women's remand prison of Dakar, located in Liberté 6, a district of Dakar. We conducted interviews with women detained at the Dakar women's remand prison who were suspected of infanticide or clandestine abortion. Results: The women in our sample had an average age under 25 years, including one minor girl aged 16. We found that 18.51% of the women were suspected of infanticide or abortion. In our study, 50% of the women came from the outskirts and suburbs of Dakar, and almost 44% came from other regions of the country. The average duration of pre-trial detention was nine months. Conclusion: Despite their distinct classification in the penal code (infanticide is a crime and abortion an offence), women suspected of having committed these acts are subjected to long periods of pre-trial detention. PMID:22937189

  14. Comparison of Organ Dosimetry for Astronaut Phantoms: Earth-Based vs. Microgravity-Based Anthropometry and Body Positioning

    NASA Technical Reports Server (NTRS)

    VanBaalen, Mary; Bahadon, Amir; Shavers, Mark; Semones, Edward

    2011-01-01

    The purpose of this study is to use NASA radiation transport codes to compare astronaut organ dose equivalents resulting from solar particle events (SPE), geomagnetically trapped protons, and free-space galactic cosmic rays (GCR) using phantom models representing Earth-based and microgravity-based anthropometry and positioning. Methods: The University of Florida hybrid adult phantoms were scaled to represent male and female astronauts with 5th, 50th, and 95th percentile heights and weights as measured on Earth. Another set of scaled phantoms, incorporating microgravity-induced changes such as spinal lengthening, leg volume loss, and the assumption of the neutral body position, was also created. A ray-tracer was created and used to generate body self-shielding distributions for dose points within a voxelized phantom under isotropic irradiation conditions, which closely approximates the free-space radiation environment. Simplified external shielding consisting of an aluminum spherical shell was used to consider the influence of a spacesuit or vehicle hull. These distributions were combined with depth-dose distributions generated from the NASA radiation transport codes BRYNTRN (SPE and trapped protons) and HZETRN (GCR) to yield dose equivalent. Many points were sampled per organ. Results: The organ dose equivalent rates were on the order of 1.5-2.5 mSv per day for GCR (1977 solar minimum) and 0.4-0.8 mSv per day for trapped proton irradiation with shielding of 2 g cm-2 aluminum equivalent. The organ dose equivalents for SPE irradiation varied considerably, with the skin and eye lens having the highest organ dose equivalents and deep-seated organs, such as the bladder, liver, and stomach, having the lowest. Conclusions: The greatest differences between the Earth-based and microgravity-based phantoms are observed for smaller ray thicknesses, since the most drastic changes involved limb repositioning and not overall phantom size. 
    Improved self-shielding models reduce the overall uncertainty in organ dosimetry for mission-risk projections and assessments for astronauts.
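The combination step the abstract describes, folding a ray-traced self-shielding distribution into a depth-dose curve, can be sketched as follows. This is an illustrative toy, not the actual BRYNTRN/HZETRN pipeline: the depth-dose-equivalent curve and ray depths below are invented numbers.

```python
import numpy as np

# Illustrative sketch: estimate the dose equivalent at one organ dose point by
# averaging a depth-dose-equivalent curve over the ray-traced areal densities
# (isotropic rays have equal solid-angle weights). All numbers are invented.

def point_dose_equivalent(ray_depths, depth_grid, dose_eq_curve):
    """Interpolate the depth curve at each per-ray areal density (g/cm^2)
    and average over the isotropically sampled rays."""
    return np.interp(ray_depths, depth_grid, dose_eq_curve).mean()

# Hypothetical depth-dose-equivalent curve (mSv/day vs areal density),
# decreasing with depth as for shielded trapped-proton exposure.
depth_grid = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
dose_curve = np.array([2.0, 1.2, 0.8, 0.5, 0.3, 0.2])

# Per-ray shielding for one dose point: a 2 g/cm^2 aluminum-equivalent shell
# plus a spread of body self-shielding along 1000 isotropic rays.
rays = 2.0 + np.random.default_rng(0).uniform(0.0, 15.0, 1000)

h = point_dose_equivalent(rays, depth_grid, dose_curve)
print(round(h, 3))   # mSv/day at this dose point
```

Repeating this over many sampled points per organ and averaging would give an organ dose equivalent rate, as in the study.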

  15. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field.

    PubMed

    Yang, Y M; Bednarz, B

    2013-02-21

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in the literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.

  16. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, Y. M.; Bednarz, B.

    2013-02-01

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in the literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
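The physics both codes must reproduce between collisions, a relativistic electron's velocity rotating about a transverse magnetic field, can be sketched with a generic Boris rotation step. This is not the EGSnrc or Geant4 implementation; the field strength and step size below are illustrative.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
M_E = 9.10938e-31        # electron rest mass, kg
Q_E = -1.60218e-19       # electron charge, C

def boris_rotate(v, b, dt):
    """Advance velocity v (m/s) in a pure magnetic field b (T) over dt (s)
    using the Boris rotation. A magnetic field does no work, so the speed
    |v| (and hence the electron's energy) is preserved by construction."""
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
    t = (Q_E * dt / (2.0 * gamma * M_E)) * b
    v_prime = v + np.cross(v, t)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    return v + np.cross(v_prime, s)

v = np.array([0.94 * C, 0.0, 0.0])   # roughly a 1 MeV electron
b = np.array([0.0, 0.0, 1.5])        # 1.5 T field, transverse to v
for _ in range(1000):
    v = boris_rotate(v, b, 1e-12)    # 1 ps steps
print(np.linalg.norm(v) / C)         # speed unchanged; direction has curled about B
```

In a condensed-history Monte Carlo code this deflection is interleaved with multiple-scattering and energy-loss steps, which is exactly where the two codes' algorithms can differ.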

  17. [The use of polymer gel dosimetry to measure dose distribution around metallic implants].

    PubMed

    Nagahata, Tomomasa; Yamaguchi, Hajime; Monzen, Hajime; Nishimura, Yasumasa

    2014-10-01

    A semi-solid polymer dosimetry system using agar was developed to measure the dose distribution close to metallic implants. Dosimetry of heterogeneous fields, where the electron density varies markedly, is often problematic. This prompted us to develop a polymer gel dosimetry technique using agar to measure the dose distribution near substance boundaries. Varying the concentration of an oxygen scavenger (tetrakis(hydroxymethyl)phosphonium chloride) showed that the relationship between the absorbed dose and the transverse relaxation rate of the magnetic resonance signal was linear between 3 and 12 Gy. Although a change in the dosimeter due to oxidization was observed in room air after 24 hours, no such effects were observed in the first 4 hours. The dose distribution around metal implants was measured using agar dosimetry. The metals tested were a lead rod, a titanium hip joint, and a metallic stent. A maximum 30% dose increase was observed near the lead rod, but only a 3% increase in absorbed dose was noted near the surfaces of the titanium hip joint and the metallic stent. Semi-solid polymer dosimetry using agar thus appears to be a useful method for dosimetry around metallic substances.
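The linear dose-response the authors report is what makes gel readout practical: a linear calibration maps an MRI relaxation-rate map to a dose map. A minimal sketch, with hypothetical calibration numbers rather than the paper's data:

```python
import numpy as np

# In polymer gel dosimetry the transverse relaxation rate R2 = 1/T2 of the MR
# signal rises linearly with absorbed dose over the usable range, so a linear
# fit inverts an R2 map into a dose map. Values below are assumed, not measured.

dose_gy = np.array([3.0, 6.0, 9.0, 12.0])   # delivered calibration doses (Gy)
r2_per_s = np.array([1.9, 2.8, 3.7, 4.6])   # hypothetical measured R2 (1/s)

slope, intercept = np.polyfit(dose_gy, r2_per_s, 1)   # R2 = slope*D + intercept

def dose_from_r2(r2):
    """Invert the linear calibration: absorbed dose (Gy) from R2 (1/s)."""
    return (r2 - intercept) / slope

print(round(dose_from_r2(3.25), 3))
```

In practice each gel batch is calibrated separately, since the oxygen-scavenger concentration (as the abstract notes) shifts the dose-response.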

  18. Total Dose Effects of Ionizing and Non-Ionizing Radiation on Piezoresistive Pressure Transducer Chips

    DTIC Science & Technology

    2003-03-01

    …facility, and Mr. Joseph Talnagi of the Ohio State Research Reactor facility, for their personal guidance and insight into reactor dosimetry and neutron… Test C1: Dosimetry… Special… Annex A-3. Preliminary Dosimetry Calculations

  19. Neutron Fading Characteristics of Copper Doped Lithium Fluoride (LiF: MCP) Thermoluminescent Dosimeters (TLDs)

    DTIC Science & Technology

    2008-05-21

    Albedo Dosimetry. TLDs that are used for neutron or neutron-photon personnel dosimetry are albedo dosimeters. The word albedo simply means the proportion… dosimetry. When LiF:MCP is exposed to thermal neutron irradiation, there is no obvious change in the glow curve shape. In the case of TLD-100, the… LiF:MCP undergoes compared to TLD-100. Therefore, LET results in significant variations in TL output for LiF:MCP. Limitations of Albedo Dosimetry

  20. Anthropomorphic Phantom Radiation Dosimetry at the NATO Standard Reference Point at Aberdeen Proving Ground,

    DTIC Science & Technology

    1987-04-01

    …and would still be well under 10°C… Table 8(a): TLD results for phantom dosimetry; all values shown are measured charge… SAI. Conclusions: The current DREO dosimetry system, consisting of bubble, CR39 and TLD dosimeters, has proven capable of producing meaningful results at…

  1. Le Phénomène Wolf-Rayet au Sein des Etoiles chaudes de Populations I et II: Histoire des Vents stellaires et Impact sur la Structure nébulaire circumstellaire

    NASA Astrophysics Data System (ADS)

    Grosdidier, Yves

    2000-12-01

    The spectra of Population I Wolf-Rayet (WR) stars show broad emission lines produced by hot stellar winds in rapid expansion (terminal velocities of the order of 1000 km/s). The standard model of WR stars qualitatively reproduces the general profile and intensity of the observed lines. However, intensive medium-resolution spectroscopy of these stars reveals stochastic variations in the lines (moving, accelerating sub-peaks; time scales of about 10-100 min). These variations are not understood within the standard model and suggest an intrinsic fragmentation of the winds. This doctoral thesis presents a study of the variability of the emission lines of Population II WR stars; the question of the impact of a fragmented WR wind on the circumstellar medium is also examined: 1) from intensive spectroscopic monitoring of the CIII λ5696 and CIV λλ5801/12 lines, we quantitatively analyse (via Temporal Variance Spectra) the winds of 5 central stars of galactic planetary nebulae (PNe) exhibiting the WR phenomenon; 2) we study the impact of the fragmentation of the winds of two Population I WR stars on the circumstellar medium via: i) IR imaging (NICMOS2/HST) of WR 137, and ii) H-alpha imaging (WFPC2/HST) and H-alpha Fabry-Perot interferometry (SIS-CFHT) of the nebula M 1-67 (central star: WR 124). The main results are as follows. POPULATION II WR WINDS: (1) We demonstrate the intrinsic spectroscopic variability of the winds of the PN nuclei HD 826 ([WC 8]), BD +30 3639 ([WC 9]) and LSS 3169 ([WC 9]), observed during 22, 15 and 1 nights respectively, and report indications of variability for the [WC 9] nuclei HD 167362 and He 2-142. 
    The variability of HD 826 and BD +30 3639 sometimes appears more sustained ("bursts" that persist over several nights); (2) The kinematics of the sub-peaks of BD +30 3639 suggest a transient anisotropy of the distribution of fragments in the wind; (3) The WR phenomenon appears to be purely atmospheric: the kinematics of the sub-peaks, the amplitudes and characteristic time scales of the variations, and the observed accelerations are similar for the two populations. However, for HD 826, a maximum acceleration of about 70 m/s2 is detected, significantly larger than those reported for the other Population I and II WR stars (about 15 m/s2); the small radius of HD 826 would be the cause; (4) As for Population I WR stars, large parameters (β greater than or equal to 3-10) are required to fit the observed accelerations with a beta-type velocity law. The beta law systematically underestimates the velocity gradients within the formation region of the CIII λ5696 line; (5) Since Population II WR winds are fragmented, estimating present-day mass-loss rates with methods that assume homogeneous atmospheres leads to an overestimate of i) the mass-loss rates themselves, and ii) the initial masses of the stars before they enter the WR phase. IMPACT OF THE WINDS: (1) At periastron, dust is detected in the environment of the WC+OB binary WR 137. Dust formation is either facilitated or triggered by the collision of the two hot winds; the crucial role of wind fragmentation (providing additional localized compression of the plasma) is suggested. (2) The nebula M 1-67 shows a non-negligible interaction with the interstellar medium (ISM) (a "bow shock"). The density and velocity fields are strongly perturbed. 
    These perturbations are related, on the one hand, to the history of the winds from WR 124 during its evolution and, on the other hand, to the interaction with the ISM. The structure functions of the density and velocity fields of M 1-67 reveal no evidence of turbulence within the nebula. (3) 2D hydrodynamic simulations performed with the ZEUS-3D code show that a dense fragment formed near the hydrostatic stellar core probably cannot reach nebular distances without the addition of radiative shielding and confinement effects.
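The "beta-type velocity law" invoked above has a simple closed form, v(r) = v_inf (1 - R_star/r)^β. A sketch follows; v_inf = 1000 km/s matches the order of magnitude quoted in the abstract, while β = 4 is merely one value inside the β ≥ 3-10 range it mentions, not a fitted result.

```python
import numpy as np

# Beta-law wind velocity profile: slow acceleration near the stellar surface,
# asymptotically approaching the terminal velocity v_inf far from the star.
# Parameter values are illustrative, not the thesis's fits.

def beta_law(r_over_rstar, v_inf=1000.0, beta=4.0):
    """Wind speed (km/s) at radius r, with r given in units of the stellar radius."""
    r = np.asarray(r_over_rstar, dtype=float)
    return v_inf * (1.0 - 1.0 / r) ** beta

r = np.array([1.5, 2.0, 5.0, 20.0])
print(beta_law(r))   # rises toward v_inf with increasing radius
```

Larger β flattens the inner velocity gradient, which is why fitting the observed sub-peak accelerations pushes β to large values, and why the thesis finds the law still underestimates the gradients in the CIII λ5696 formation region.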

  2. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics agreed within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two systems did not meet the 90% acceptance criterion when the magnitude of the error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed agreement within 1% between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system agreed very well with those of the physical dosimetry system. Our results suggest that a log-file-based online dosimetry system is a very suitable verification tool for accurate and efficient patient-specific quality assurance (QA) in clinical routine.
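The gamma passing rate used throughout this comparison can be illustrated with a minimal 1D toy. Clinical systems such as the Delta4PT or MFX evaluate full 3D dose grids with options (local normalization, low-dose thresholds) not modelled here; this sketch only shows the 3%/3 mm global criterion itself.

```python
import numpy as np

def gamma_pass_rate(ref, meas, x_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points whose 1D gamma index is <= 1
    (global dose normalization to the reference maximum)."""
    gammas = []
    for xi, di in zip(x_mm, ref):
        dd = (meas - di) / (dose_tol * ref.max())    # dose-difference term
        dx = (x_mm - xi) / dist_tol_mm               # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dx**2).min())  # gamma = min over comparison points
    return float(np.mean(np.array(gammas) <= 1.0))

x = np.linspace(0.0, 100.0, 101)               # positions, mm
reference = np.exp(-((x - 50.0) / 20.0) ** 2)  # idealized TPS dose profile
measured = 1.02 * reference                    # delivery with a 2% global error
rate = gamma_pass_rate(reference, measured, x)
print(rate)   # 2% error < 3% tolerance, so every point passes
```

A pure 2% scaling error passes a 3%/3 mm criterion everywhere, which is why gamma analysis is typically paired with DVH metrics, as in this study, to catch clinically relevant but gamma-tolerant deviations.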

  3. The physics of small megavoltage photon beam dosimetry.

    PubMed

    Andreo, Pedro

    2018-02-01

    The increased interest during recent years in the use of small megavoltage photon beams in advanced radiotherapy techniques has led to the development of dosimetry recommendations by different national and international organizations. Their requirement of data suitable for the different clinical options available, regarding treatment units and dosimetry equipment, has generated a considerable amount of research by the scientific community during the last decade. The multiple publications in the field have led not only to the availability of new invaluable data, but have also contributed substantially to an improved understanding of the physics of their dosimetry. This work provides an overview of the most important aspects that govern the physics of small megavoltage photon beam dosimetry. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    NASA Astrophysics Data System (ADS)

    Gu, Songxiang; Gupta, Rajiv; Kyprianou, Iacovos

    2011-09-01

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. 
Any phantom in this family, along with user-defined stenoses, can be used to obtain clinically realistic projection images with the Monte Carlo code penMesh for optimizing imaging and dosimetry.
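The histogram-analysis step described above can be illustrated with a toy intensity model. This is only a sketch of the first stage: real coronary CTA segmentation, as the abstract notes, adds vesselness filtering, connectivity constraints, and active contours. The intensity distributions and threshold below are invented for demonstration.

```python
import numpy as np

# Toy version of the histogram step: contrast-filled ventricle/vessel voxels
# are much brighter than myocardium, so a threshold at the histogram valley
# separates the two classes. All numbers are assumed, HU-like values.

rng = np.random.default_rng(1)
myocardium = rng.normal(80.0, 15.0, 5000)       # hypothetical soft-tissue voxels
contrast_blood = rng.normal(350.0, 40.0, 2000)  # hypothetical contrast-filled voxels
volume = np.concatenate([myocardium, contrast_blood])

threshold = 200.0                 # chosen from the valley between the two modes
vessel_mask = volume > threshold  # first-pass blood-pool / vessel candidate mask
print(int(vessel_mask.sum()))     # close to the number of contrast voxels
```

On real data the threshold is read off the image histogram rather than fixed, and the resulting mask is then refined by the anatomy-specific criteria the paper lists before meshing.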

  5. Organ biodistribution of Germanium-68 in rat in the presence and absence of [68Ga]Ga-DOTA-TOC for the extrapolation to the human organ and whole-body radiation dosimetry

    PubMed Central

    Velikyan, Irina; Antoni, Gunnar; Sörensen, Jens; Estrada, Sergio

    2013-01-01

    Positron Emission Tomography (PET), and in particular gallium-68 (68Ga) applications, are growing exponentially worldwide, contributing to the expansion of nuclear medicine and the personalized management of patients. The significance of 68Ga's utility is reflected in the implementation of European Pharmacopoeia monographs. However, one crucial point in the monographs might limit the use of the generators and consequently the expansion of 68Ga applications: the limit of 0.001% Germanium-68 (68Ge(IV)) radioactivity content in a radiopharmaceutical. We have investigated the organ distribution of 68Ge(IV) in the rat and estimated human dosimetry parameters in order to provide experimental evidence for the determination and justification of the 68Ge(IV) limit. Male and female rats were injected in the tail vein with formulated [68Ge]GeCl4 in the absence or presence of [68Ga]Ga-DOTA-TOC. The tissue radioactivity distribution data were extrapolated to estimate human organ equivalent doses and the total effective dose using the Organ Level Internal Dose Assessment Code software (OLINDA/EXM). 68Ge(IV) was evenly distributed among the rat organs, and fast renal excretion prevailed. Human organ equivalent dose and total effective dose estimates indicated that the kidneys were the dose-limiting organs (185±54 μSv/MBq for females and 171±38 μSv/MBq for males) and that the total effective dose was 15.5±0.1 and 10.7±1.2 μSv/MBq for females and males, respectively. This dosimetry study concludes that the 68Ge(IV) limit currently recommended by the monographs could be increased considerably (>100 times) without exposing the patient to harm, given the small absorbed doses to normal organs and the fast excretion. PMID:23526484
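The roll-up from organ equivalent doses to a total effective dose, the quantity OLINDA/EXM reports, is a tissue-weighted sum. A sketch follows: the kidney value echoes the abstract's male estimate, but the other organ doses and all tissue weighting factors are placeholders, not the ICRP values used in the actual calculation.

```python
# Effective dose = sum over tissues of (tissue weighting factor) x (organ
# equivalent dose per unit injected activity). Placeholder numbers throughout,
# except the 171 uSv/MBq kidney figure taken from the abstract's male estimate.

organ_equiv_uSv_per_MBq = {"kidneys": 171.0, "liver": 12.0, "remainder": 9.0}
tissue_weight = {"kidneys": 0.04, "liver": 0.04, "remainder": 0.92}  # placeholders, sum to 1

effective_dose = sum(w * organ_equiv_uSv_per_MBq[t] for t, w in tissue_weight.items())
print(round(effective_dose, 2))   # uSv/MBq
```

Because the kidney weighting factor is small, even a dose-limiting kidney contribution leaves the total effective dose modest, which is the shape of the argument behind relaxing the 68Ge(IV) limit.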

  6. SU-E-QI-15: Single Point Dosimetry by Means of Cerenkov Radiation Energy Transfer (CRET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volotskova, O; Jenkins, C; Xing, L

    2014-06-15

Purpose: Cerenkov light is generated when a charged particle with energy greater than 250 keV moves faster than the speed of light in a given medium. Both x-ray photon and electron beams produce optical Cerenkov photons during static megavoltage linear accelerator (LINAC) operation. Recently, Cerenkov radiation has gained considerable interest as a candidate for a new imaging modality. Optical signals generated by Cerenkov radiation may act as a surrogate for the absorbed superficial radiation dose. We demonstrated a novel single point dosimetry method for megavoltage photon and electron therapy utilizing down-conversion of Cerenkov photons. Methods: A custom-built signal characterization system was used: a sample holder (probe) with adjacent light-tight compartments was connected via fiber-optic cables to a photon-counting photomultiplier tube (PMT). One compartment contains the medium only, while the other contains the medium and red-shifting nanoparticles (Q-dots, nanoclusters). By taking the difference between the two signals (Cerenkov photons and CRET photons), we obtain a measure of the down-converted light, which we expect to be proportional to dose as measured with an adjacent ion chamber. Experimental results are compared to Monte Carlo simulations performed using the GEANT4 code. Results: The correlation between the CR signal, CRET readings and the dose produced by the LINAC at a single point was investigated. The experimental results were compared with simulations. Dose linearity, signal-to-noise ratio and dose rate dependence were tested with the custom-built CRET-based probe. Conclusion: Performance characteristics of the proposed single point CRET-based probe were evaluated. The direct use of the induced Cerenkov emission and CRET in an irradiated single point volume as an indirect surrogate for the imparted dose was investigated. We conclude that CRET is a promising optical dosimetry method that offers advantages over those already proposed.
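The two-compartment readout described above amounts to a background-subtracted measurement: the medium-only compartment supplies the raw Cerenkov signal, and the difference isolates the down-converted (CRET) component, which is mapped to dose through a cross-calibration against the ion chamber. A minimal sketch (the calibration constant k is hypothetical, not a value from the abstract):

```python
def cret_dose(counts_qdot, counts_medium, k):
    """Estimate dose from the down-converted (CRET) light component.

    counts_qdot   -- PMT counts from the compartment with red-shifting Q-dots
    counts_medium -- PMT counts from the medium-only (Cerenkov) compartment
    k             -- dose per net count, from an ion-chamber cross-calibration
                     (hypothetical value in any real setup)
    """
    net_cret = counts_qdot - counts_medium  # down-converted light only
    return k * net_cret
```

The linearity test reported in the abstract corresponds to verifying that this net signal scales linearly with the ion-chamber dose over the range of interest.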

  7. Reference Dosimetry according to the New German Protocol DIN 6800-2 and Comparison with IAEA TRS 398 and AAPM TG 51*

    PubMed Central

    Zakaria, A; Schuette, W; Younan, C

    2011-01-01

The preceding DIN 6800-2 (1997) protocol has been revised by a German task group, and its latest version was published in March 2008 as the national standard dosimetry protocol DIN 6800-2 (2008). Since then, the determination of absorbed dose to water for high-energy photon and electron beams in Germany must be performed according to this new German dosimetry protocol. The IAEA Code of Practice TRS-398 (2000) and the AAPM TG-51 protocol are the two main protocols applied internationally. The new German version has largely adopted the methodology and dosimetric data of TRS-398. This paper systematically investigates the DIN 6800-2 protocol and compares it with the procedures and results obtained using the international protocols. The investigation was performed with 6 MV and 18 MV photon beams as well as with electron beams from 5 MeV to 21 MeV. While only cylindrical chambers were used for photon beams, the measurements in electron beams were performed using both cylindrical and plane-parallel chambers. The discrepancies in the determination of absorbed dose to water among the three protocols were found to be 0.23% for photon beams and 1.2% for electron beams. The determination of absorbed dose to water was also checked by a national audit procedure using TLDs. The comparison between the measurements following the DIN 6800-2 protocol and the TLD audit procedure confirmed a difference of less than 2%. The advantages of the new German protocol DIN 6800-2 lie in its dispensing with the cross-calibration procedure and in its clear presentation of formulas and parameters. In the past, the different protocols evolved differently over time; today, a good convergence in concepts and methods has been achieved. PMID:22287987

  9. In vitro Dosimetric Study of Biliary Stent Loaded with Radioactive 125I Seeds

    PubMed Central

    Yao, Li-Hong; Wang, Jun-Jie; Shang, Charles; Jiang, Ping; Lin, Lei; Sun, Hai-Tao; Liu, Lu; Liu, Hao; He, Di; Yang, Rui-Jie

    2017-01-01

Background: A novel radioactive 125I seed-loaded biliary stent has been used for patients with malignant biliary obstruction. However, the dosimetric characteristics of these stents remain unclear. Therefore, we aimed to describe the dosimetry of stents of different lengths, loaded with different numbers and activities of 125I seeds. Methods: The radiation dosimetry of three representative radioactive stent models was evaluated using a treatment planning system (TPS), thermoluminescent dosimeter (TLD) measurements, and Monte Carlo (MC) simulations. For the TPS calculations and TLD measurements, two different water-equivalent phantoms were designed to obtain the cumulative radial dose distribution. Calibration procedures using TLDs in the designed phantom were also conducted. MC simulations were performed using the general-purpose Monte Carlo N-Particle eXtended version 2.5 code to calculate the radioactive stent's three-dimensional dose rate distribution in liquid water. Analysis of covariance was used to examine the factors influencing the radial dose distribution of the radioactive stent. Results: The maximum reduction in cumulative radial dose was 26% when the seed activity changed from 0.5 mCi to 0.4 mCi for stents of the same length. The TLD dose response to 137Cs γ-ray irradiation was linear over the range 0–10 mGy: y = 182225x − 6651.9 (R2 = 0.99152; y is the irradiation dose in mGy, x is the TLD reading in nC). When TLDs were irradiated to a dose of 1 mGy by radiation sources of different energies, their readings differed. Doses at a distance of 0.1 cm from the three stents' surfaces simulated by MC were 79, 93, and 97 Gy. Conclusions: TPS calculation, TLD measurement, and MC simulation were found to be in good agreement. Although the experiments were conducted in water-equivalent phantoms, these data may provide a theoretical basis for dosimetry in clinical applications. PMID:28469106
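The reported TLD calibration is a simple linear fit, so converting a reading to dose is one line; a sketch using the fit exactly as quoted (y in mGy, x in nC, valid only over the 0–10 mGy range in which it was established):

```python
def tld_dose_mGy(reading_nC):
    """Dose from a TLD reading using the paper's linear calibration fit
    y = 182225*x - 6651.9 (R^2 = 0.99152), valid over 0-10 mGy."""
    return 182225.0 * reading_nC - 6651.9
```

As the abstract notes, TLD response is energy dependent, so a fit established with 137Cs should not be applied unchanged to readings from sources of substantially different energy.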

  10. SU-F-T-54: Determination of the AAPM TG-43 Brachytherapy Dosimetry Parameters for A New Titanium-Encapsulated Yb-169 Source by Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Washington University School of Medicine, St. Louis, MO; Munro, J

    2016-06-15

Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, described in the investigators' published MC optimization study, was modified based on the source manufacturer's detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source as actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy per hour per U, about 10% lower than the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The results suggest that using titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the source, while allowing more low-energy photons to be transmitted through the source filter, thereby increasing dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.
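For context, the TG-43 quantities reported here (dose rate constant, radial dose function, anisotropy) enter dose calculation through the standard formalism. A minimal point-source sketch follows; the g(r) table and anisotropy factor are illustrative placeholders, not the paper's Yb-169 data:

```python
# Point-source TG-43 dose rate: D(r) = Sk * Lambda * (r0/r)^2 * g(r) * phi_an,
# with r0 = 1 cm. The radial dose function table below is an illustrative
# placeholder, NOT measured Yb-169 data.
G_TABLE = [(0.5, 1.02), (1.0, 1.00), (2.0, 0.95), (5.0, 0.80)]  # (r_cm, g)

def radial_dose(r_cm):
    """Linear interpolation in the (illustrative) g(r) table."""
    for (r1, g1), (r2, g2) in zip(G_TABLE, G_TABLE[1:]):
        if r1 <= r_cm <= r2:
            return g1 + (g2 - g1) * (r_cm - r1) / (r2 - r1)
    raise ValueError("r outside tabulated range")

def dose_rate_cGy_per_h(sk_U, lam, r_cm, phi_an=1.0):
    """Point-source TG-43 dose rate at distance r (inverse-square from r0 = 1 cm)."""
    return sk_U * lam * (1.0 / r_cm) ** 2 * radial_dose(r_cm) * phi_an
```

With the abstract's dose rate constant (Λ ≈ 1.05 cGy/(h·U)), the dose rate at the reference point (r = 1 cm) per unit air-kerma strength reduces to Λ itself.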

  11. Gold nanoparticle‐based brachytherapy enhancement in choroidal melanoma using a full Monte Carlo model of the human eye

    PubMed Central

    Vaez‐zadeh, Mehdi; Masoudi, S. Farhad; Rahmani, Faezeh; Knaup, Courtney; Meigooni, Ali S.

    2015-01-01

The effects of gold nanoparticles (GNPs) on 125I brachytherapy dose enhancement in choroidal melanoma are examined using the Monte Carlo simulation technique. Usually, Monte Carlo ophthalmic brachytherapy dosimetry is performed in a water phantom; here, however, the composition of the human eye has been considered instead of water. Both human-eye and water phantoms were simulated with the MCNP5 code. These simulations were performed for a fully loaded 16 mm COMS eye plaque containing 13 125I seeds. The dose delivered to the tumor and normal tissues was calculated in both phantoms with and without GNPs. Radiation therapy is normally designed to deliver the required dose to the tumor while sparing the surrounding normal tissues. However, because normal and cancerous cells absorb dose in a nearly identical fashion, normal tissue also receives radiation dose during treatment. The use of GNPs in combination with radiotherapy decreases the dose absorbed by normal tissues. The results indicate that the dose to the tumor in an eyeball implanted with a COMS plaque increases with increasing GNP concentration inside the target. Therefore, the required irradiation time for eye tumors is decreased by adding GNPs prior to treatment; as a result, the dose to normal tissues decreases when the irradiation time is reduced. Furthermore, a comparison between simulated data in an eye phantom made of water and one made of realistic human eye composition, in the presence of GNPs, shows the significance of using the eye composition in ophthalmic brachytherapy dosimetry: defining the eye composition instead of water leads to more accurate calculations of GNP radiation effects. PACS number: 87.53.Jw, 87.85.Rs, 87.10.Rt PMID:26699318

  12. Trends in gel dosimetry: Preliminary bibliometric overview of active growth areas, research trends and hot topics from Gore’s 1984 paper onwards

    NASA Astrophysics Data System (ADS)

    Baldock, C.

    2017-05-01

    John Gore’s seminal 1984 paper on gel dosimetry spawned a vibrant research field ranging from fundamental science through to clinical applications. A preliminary bibliometric study was undertaken of the gel dosimetry family of publications inspired by, and resulting from, Gore’s original 1984 paper to determine active growth areas, research trends and hot topics from Gore’s paper up to and including 2016. Themes and trends of the gel dosimetry research field were bibliometrically explored by way of co-occurrence term maps using the titles and abstracts text corpora from the Web of Science database for all relevant papers from 1984 to 2016. Visualisation of similarities was used by way of the VOSviewer visualisation tool to generate cluster maps of gel dosimetry knowledge domains and the associated citation impact of topics within the domains. Heat maps were then generated to assist in the understanding of active growth areas, research trends, and emerging and hot topics in gel dosimetry.
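The co-occurrence term maps underlying tools like VOSviewer start from a simple pair count over the text corpus; a toy sketch of that counting step (real tools add thesaurus filtering, noun-phrase extraction and relevance scoring on top):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(documents):
    """Count how often each unordered term pair appears in the same document."""
    pair_counts = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))  # unique terms per document
        pair_counts.update(combinations(terms, 2))
    return pair_counts

counts = cooccurrence(["gel dosimetry mri", "gel dosimetry optical ct"])
```

The resulting pair counts are what the clustering and heat-map layers of a bibliometric map are built on.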

  13. EURADOS strategic research agenda: vision for dosimetry of ionising radiation.

    PubMed

    Rühm, W; Fantuzzi, E; Harrison, R; Schuhmacher, H; Vanhavere, F; Alves, J; Bottollier Depois, J F; Fattibene, P; Knežević, Ž; Lopez, M A; Mayer, S; Miljanić, S; Neumaier, S; Olko, P; Stadtmann, H; Tanner, R; Woda, C

    2016-02-01

Since autumn 2012, the European Radiation Dosimetry Group (EURADOS) has been developing its Strategic Research Agenda (SRA), which is intended to contribute to the identification of future research needs in radiation dosimetry in Europe. The present article summarises, based on input from EURADOS Working Groups (WGs) and Voting Members, five visions in dosimetry and defines key issues in dosimetry research that are considered important for the next decades. The five visions include scientific developments required towards (a) updated fundamental dose concepts and quantities, (b) improved radiation risk estimates deduced from epidemiological cohorts, (c) efficient dose assessment for radiological emergencies, (d) integrated personalised dosimetry in medical applications and (e) improved radiation protection of workers and the public. The SRA of EURADOS will be used as a guideline for future activities of the EURADOS WGs. A detailed version of the SRA can be downloaded as a EURADOS report from the EURADOS website (www.eurados.org).

  14. Reviewing three dimensional dosimetry: basics and utilization as presented over 17 Years of DosGel and IC3Ddose

    NASA Astrophysics Data System (ADS)

    Schreiner, L. J.

    2017-05-01

For seventeen years, a community of basic and clinical scientists and researchers has been meeting biennially to promote the clinical advance of techniques to measure radiation dose in three dimensions. The interest in this dosimetry was motivated by its promise as an effective methodology for 3D measurement of the complex conformal dose distributions achieved by modern techniques such as Intensity Modulated and Volumetric Arc Radiation Therapy. Each of the International Conferences on 3D Radiation Dosimetry resulted in the publication of informative proceedings [1-8], the majority openly available on the internet. The proceedings included papers that: i) reviewed the basic science of the radiation-sensitive materials used to accumulate the dose information, ii) introduced the science and engineering of the imaging systems required to read the information out, iii) described the workflows and systems required for efficient dosimetry, iv) reported the protocols required for reproducible dosimetry, and v) showed examples of clinical use illustrating advantages and limitations of the dosimetry. This paper is intended to use the framework provided by these proceedings to review the current 3D chemical dosimeters available and to discuss the requirements for their use. The paper describes how 3D dosimetry can complement other dose delivery validation approaches available in the clinic. It closes with some personal reflections on how the motivation for, and practice of, 3D dosimetry have changed (or not) over the years.

  15. Dosimetry of ionising radiation in modern radiation oncology

    NASA Astrophysics Data System (ADS)

    Kron, Tomas; Lehmann, Joerg; Greer, Peter B.

    2016-07-01

    Dosimetry of ionising radiation is a well-established and mature branch of physical sciences with many applications in medicine and biology. In particular radiotherapy relies on dosimetry for optimisation of cancer treatment and avoidance of severe toxicity for patients. Several novel developments in radiotherapy have introduced new challenges for dosimetry with small and dynamically changing radiation fields being central to many of these applications such as stereotactic ablative body radiotherapy and intensity modulated radiation therapy. There is also an increasing awareness of low doses given to structures not in the target region and the associated risk of secondary cancer induction. Here accurate dosimetry is important not only for treatment optimisation but also for the generation of data that can inform radiation protection approaches in the future. The article introduces some of the challenges and highlights the interdependence of dosimetric calculations and measurements. Dosimetric concepts are explored in the context of six application fields: reference dosimetry, small fields, low dose out of field, in vivo dosimetry, brachytherapy and auditing of radiotherapy practice. Recent developments of dosimeters that can be used for these purposes are discussed using spatial resolution and number of dimensions for measurement as sorting criteria. While dosimetry is ever evolving to address the needs of advancing applications of radiation in medicine two fundamental issues remain: the accuracy of the measurement from a scientific perspective and the importance to link the measurement to a clinically relevant question. This review aims to provide an update on both of these.

  16. High-fidelity large eddy simulation for supersonic jet noise prediction

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt M.

    The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused at evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments are undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly-parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero pressure gradient flat plate boundary layers, and for subsonic and supersonic jets. 
The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments of beveled converging-diverging nozzles have found reduced noise levels for some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared to similar experiments and excellent agreement is found. Potential areas of improvement are discussed for future research.
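The FW-H propagation step used throughout this work rests on the Ffowcs Williams-Hawkings equation; in its standard differential form (standard notation, not reproduced from this thesis), with p' the acoustic pressure, f = 0 defining the integration surface, H and δ the Heaviside and Dirac functions, T_{ij} the Lighthill stress tensor, and L_i and Q the loading and thickness source terms:

```latex
\left(\frac{1}{c_0^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)p'(\mathbf{x},t)
=\frac{\partial^{2}}{\partial x_i\,\partial x_j}\bigl[T_{ij}\,H(f)\bigr]
-\frac{\partial}{\partial x_i}\bigl[L_i\,\delta(f)\bigr]
+\frac{\partial}{\partial t}\bigl[Q\,\delta(f)\bigr]
```

The end-cap treatments studied in the thesis concern how the quadrupole contribution crossing the downstream closure of this surface is handled.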

  17. Consequence modeling using the fire dynamics simulator.

    PubMed

    Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent

    2004-11-11

The use of Computational Fluid Dynamics (CFD), and in particular Large Eddy Simulation (LES) codes, to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects on adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small- and large-scale validation runs and process safety applications. The paper presents two fire experiments: a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In this simulation, a room was filled with propane, given time to disperse, and then ignited. The model yields accurate results for the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper demonstrates that FDS is ideally suited to building realistic models of process geometries in which large-scale explosion and fire failure risks can be evaluated, with several distinct advantages over more traditional CFD codes. Namely, transient solutions to fire and explosion growth can be produced with less sophisticated hardware (lower cost) than needed for traditional CFD codes (a PC rather than a UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and run time. Additionally, the results produced can be analyzed, viewed, and tabulated during and after a model run within a PC environment. There are some tradeoffs, however, as rapid computation on PCs may require a sacrifice in grid resolution or in the sub-grid modeling, depending on the size of the geometry modeled.

  18. Les soins aux enfants et aux adolescents des familles des militaires canadiens : les considérations particulières

    PubMed Central

    Rowan-Legg, Anne

    2017-01-01

Abstract: Military families face numerous stressors, such as frequent relocations, long periods of family separation, geographic isolation from the extended family's support network, and deployment to highly dangerous areas. The children and adolescents of military families follow the same developmental and motivational trajectories as their civilian counterparts, but they also contend with unusual developmental pressures and stressors imposed on them by the demands of military life. The effects of military life on families and children are beginning to be recognised and better characterised. Understanding the concerns specific to the children and adolescents of military families, and mobilising the resources needed to support them, is essential to meeting their health needs.

  19. The NCS code of practice for the quality assurance and control for volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Mans, Anton; Schuring, Danny; Arends, Mark P.; Vugts, Cornelia A. J. M.; Wolthaus, Jochem W. H.; Lotz, Heidi T.; Admiraal, Marjan; Louwe, Rob J. W.; Öllers, Michel C.; van de Kamer, Jeroen B.

    2016-10-01

    In 2010, the NCS (Netherlands Commission on Radiation Dosimetry) installed a subcommittee to develop guidelines for quality assurance and control for volumetric modulated arc therapy (VMAT) treatments. The report (published in 2015) has been written by Dutch medical physicists and has therefore, inevitably, a Dutch focus. This paper is a condensed version of these guidelines, the full report in English is freely available from the NCS website www.radiationdosimetry.org. After describing the transition from IMRT to VMAT, the paper addresses machine quality assurance (QA) and treatment planning system (TPS) commissioning for VMAT. The final section discusses patient specific QA issues such as the use of class solutions, measurement devices and dose evaluation methods.

  20. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.
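Eigenvalue agreement of the kind reported here is conventionally quoted as a reactivity difference in pcm; a minimal sketch of that standard conversion (a textbook formula, not code from the Shift V&V suite):

```python
def reactivity_diff_pcm(k_calc, k_ref):
    """Reactivity difference between two eigenvalues in pcm:
    (rho_calc - rho_ref) * 1e5, with rho = (k - 1) / k."""
    return ((k_calc - 1.0) / k_calc - (k_ref - 1.0) / k_ref) * 1e5
```

A calculated k-eff of 1.00100 against a benchmark value of 1.00000, for instance, corresponds to a discrepancy of roughly 100 pcm.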

  1. A comparison of two methods of in vivo dosimetry for a high energy neutron beam.

    PubMed

    Blake, S W; Bonnett, D E; Finch, J

    1990-06-01

    Two methods of in vivo dosimetry have been compared in a high energy neutron beam. These were activation dosimetry and thermoluminescence dosimetry (TLD). Their suitability was determined by comparison with estimates of total dose, obtained using a tissue equivalent ionization chamber. Measurements were made on the central axis and a profile of a 10 x 10 cm square field and also behind a shielding block in order to simulate conditions of clinical use. The TLD system was found to provide the best estimate of total dose.

  2. Radiation dosimetry for quality control of food preservation and disinfestation

    NASA Astrophysics Data System (ADS)

    McLaughlin, W. L.; Miller, A.; Uribe, R. M.

In the use of x and gamma rays and scanned electron beams to extend the shelf life of food by delaying sprouting and ripening, killing microbes, and controlling insect populations, quality assurance is provided by standardized radiation dosimetry. By strategic placement of calibrated dosimeters that are sufficiently stable and reproducible, it is possible to monitor minimum and maximum absorbed dose levels and dose uniformity for a given processed foodstuff. The dosimetry procedure is especially important in the commissioning of a process and in adjusting process parameters (e.g. conveyor speed) to meet changes that occur in product and source parameters (e.g. bulk density and radiation spectrum). Routine dosimetry methods and certain corrections of dosimetry data may be selected for the radiations used in typical food processes.

  3. International Standardization of the Clinical Dosimetry of Beta Radiation Brachytherapy Sources: Progress of an ISO Standard

    NASA Astrophysics Data System (ADS)

    Soares, Christopher

    2006-03-01

In 2004 a new work item proposal (NWIP) was accepted by the International Organization for Standardization (ISO) Technical Committee 85 (TC85 -- Nuclear Energy), Subcommittee 2 (Radiation Protection) for the development of a standard for the clinical dosimetry of beta radiation sources used for brachytherapy. To develop this standard, a new Working Group (WG 22 - Ionizing Radiation Dosimetry and Protocols in Medical Applications) was formed. The standard is based on the work of an ad-hoc working group initiated by the Dosimetry task group of the Deutsches Institut für Normung (DIN). Initially the work was geared mainly towards the needs of intravascular brachytherapy, but with the decline of this application, more focus has been placed on the challenges of accurate dosimetry for the concave eye plaques used to treat ocular melanoma. Guidance is given for dosimetry formalisms, reference data to be used, calibrations, measurement methods, modeling, uncertainty determinations, treatment planning and reporting, and clinical quality control. The document is currently undergoing review by the ISO member bodies for acceptance as a Committee Draft (CD), with publication of the final standard expected by 2007. There are opportunities for other ISO standards for medical dosimetry within the framework of WG22.

  4. Specific issues in small animal dosimetry and irradiator calibration

    PubMed Central

    Yoshizumi, Terry; Brady, Samuel L.; Robbins, Mike E.; Bourland, J. Daniel

    2013-01-01

Purpose In response to the increased risk of radiological terrorist attack, a network of Centers for Medical Countermeasures against Radiation (CMCR) has been established in the United States, focusing on evaluating animal model responses to uniform, relatively homogeneous whole- or partial-body radiation exposures at relatively high dose rates. The success of such studies depends not only on robust animal models but on accurate and reproducible dosimetry within and across CMCRs. To address this issue, the Education and Training Core of the Duke University School of Medicine CMCR organised a one-day workshop on small animal dosimetry. Topics included accuracy in animal dosimetry, characteristics and differences of cesium-137 and X-ray irradiators, methods for dose measurement, and design of experimental irradiation geometries for uniform dose distributions. This paper summarises the information presented and discussed. Conclusions Without accurate and reproducible dosimetry, the development and assessment of the efficacy of putative countermeasures will not prove successful. Radiation physics support is needed, but is often the weakest link in the small animal dosimetry chain. We recommend: (i) a user training program for new irradiator users, (ii) subsequent training updates, and (iii) the establishment of a national small animal dosimetry center for all CMCR members. PMID:21961967

  5. Evaluation of Dosimetry Check software for IMRT patient-specific quality assurance.

    PubMed

    Narayanasamy, Ganesh; Zalman, Travis; Ha, Chul S; Papanikolaou, Niko; Stathakis, Sotirios

    2015-05-08

    The purpose of this study is to evaluate the use of the Dosimetry Check system for patient-specific IMRT QA. Typical QA methods measure the dose in an array dosimeter surrounded by homogeneous medium for which the treatment plan has been recomputed. With the Dosimetry Check system, fluence measurements acquired on a portal dosimeter are applied to the patient's CT scans. Instead of making dose comparisons in a plane, the Dosimetry Check system produces isodose lines and dose-volume histograms based on the planning CT images. By exporting the dose distribution from the treatment planning system into the Dosimetry Check system, one is able to make a direct comparison between the calculated dose and the planned dose. The versatility of the software is evaluated with respect to two IMRT techniques: step and shoot and volumetric arc therapy. The system analyzed measurements made using EPID, PTW seven29, and IBA MatriXX, and an intercomparison study was performed. Plans from patients previously treated at our institution, with treatment sites including brain, head & neck, liver, lung, and prostate, were analyzed using the Dosimetry Check system for any anatomical site dependence. We provide recommendations and precautions that may be necessary to ensure proper QA with the Dosimetry Check system.

  6. Monte Carlo modelling the dosimetric effects of electrode material on diamond detectors.

    PubMed

    Baluti, Florentina; Deloar, Hossain M; Lansley, Stuart P; Meyer, Juergen

    2015-03-01

    Diamond detectors for radiation dosimetry were modelled using the EGSnrc Monte Carlo code to investigate the influence of electrode material and detector orientation on the absorbed dose. The small dimensions of the electrode/diamond/electrode detector structure required very thin voxels and the use of non-standard DOSXYZnrc Monte Carlo model parameters. Interface phenomena were investigated by simulating a 6 MV beam and detectors with different electrode materials, namely Al, Ag, Cu and Au, with thicknesses of 0.1 µm for the electrodes and 0.1 mm for the diamond, in both perpendicular and parallel detector orientations with respect to the incident beam. The smallest perturbations were observed for the parallel detector orientation and Al electrodes (Z = 13). In summary, the EGSnrc Monte Carlo code is well suited for modelling small detector geometries. The Monte Carlo model developed is a useful tool to investigate the dosimetric effects caused by different electrode materials. To minimise perturbations caused by the detector electrodes, it is recommended that the electrodes be made from a low-atomic-number material and placed parallel to the beam direction.

  7. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  8. Dose estimation for astronauts using dose conversion coefficients calculated with the PHITS code and the ICRP/ICRU adult reference computational phantoms.

    PubMed

    Sato, Tatsuhiko; Endo, Akira; Sihver, Lembit; Niita, Koji

    2011-03-01

    Absorbed-dose and dose-equivalent rates for astronauts were estimated by multiplying fluence-to-dose conversion coefficients, in the units of Gy.cm(2) and Sv.cm(2), respectively, by cosmic-ray fluxes around spacecraft in the unit of cm(-2) s(-1). The dose conversion coefficients employed in the calculation were evaluated using the general-purpose particle and heavy ion transport code system PHITS coupled to the male and female adult reference computational phantoms, which were released as a common ICRP/ICRU publication. The cosmic-ray fluxes inside and near spacecraft were also calculated by PHITS, using simplified geometries. The accuracy of the obtained absorbed-dose and dose-equivalent rates was verified by various experimental data measured both inside and outside spacecraft. The calculations quantitatively show that the effective doses for astronauts are significantly greater than their corresponding effective dose equivalents, because of the numerical incompatibility between the radiation quality factors and the radiation weighting factors. These results demonstrate the usefulness of dose conversion coefficients in space dosimetry. © Springer-Verlag 2010
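
    The fold described above (dose rate = sum over particle types and energy bins of flux times conversion coefficient) can be sketched as follows. This is a minimal illustration, not the paper's code; the two-bin fluxes and coefficients below are invented for demonstration.

    ```python
    # Sketch: dose-equivalent rate from fluence-to-dose conversion coefficients.
    # All numerical values are hypothetical placeholders.

    def dose_equivalent_rate(fluxes_cm2_s, coefficients_sv_cm2):
        """Sum flux (cm^-2 s^-1) x coefficient (Sv.cm^2) over particle
        types / energy bins, giving a dose-equivalent rate in Sv/s."""
        return sum(phi * h for phi, h in zip(fluxes_cm2_s, coefficients_sv_cm2))

    # Hypothetical two-bin example (e.g. proton and helium-ion components):
    fluxes = [2.0e-1, 5.0e-3]   # cm^-2 s^-1
    coeffs = [4.0e-10, 6.0e-9]  # Sv.cm^2
    rate = dose_equivalent_rate(fluxes, coeffs)  # Sv/s
    ```

    In practice the coefficients are tabulated per particle type and energy, so the sum runs over a binned spectrum rather than two scalars.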

  9. Determining the mass attenuation coefficient, effective atomic number, and electron density of raw wood and binderless particleboards of Rhizophora spp. by using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh

    Rhizophora spp. wood has the potential to serve as a solid water or tissue equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5 calculations of the attenuation parameters for the Rhizophora spp. samples were plotted graphically against photon energy and discussed in terms of their relative differences compared with those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicated that the MCNP5 approach can be used to determine the attenuation of gamma rays at several photon energies in other materials.
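
    The XCOM-style mixture rule mentioned above, (mu/rho)_mixture = sum_i w_i (mu/rho)_i over elemental weight fractions, can be sketched in a few lines. The composition and coefficient values below are illustrative placeholders, not data for Rhizophora spp.

    ```python
    # Sketch of the mixture rule for mass attenuation coefficients.
    # Weight fractions and mu/rho values are hypothetical placeholders.

    def mixture_mass_attenuation(weight_fractions, elemental_mu_rho):
        """Return (mu/rho) of a mixture in cm^2/g; fractions must sum to 1."""
        assert abs(sum(weight_fractions) - 1.0) < 1e-9
        return sum(w * m for w, m in zip(weight_fractions, elemental_mu_rho))

    # Hypothetical H/C/O composition at a single photon energy:
    w = [0.06, 0.44, 0.50]            # weight fractions
    mu_rho = [0.335, 0.187, 0.195]    # cm^2/g, placeholder values
    mix = mixture_mass_attenuation(w, mu_rho)
    ```

    Real calculations repeat this per energy using tabulated elemental cross sections (e.g. from XCOM) and the measured elemental composition of the sample.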

  10. Recherches sur l'histoire de l'astronomie ancienne

    NASA Astrophysics Data System (ADS)

    Tannery, Paul

    2015-04-01

    Preface; 1. What the Hellenes called astronomy; 2. What the Hellenes called astrology (cont.); 3. The Alexandrian mathematicians; 4. The postulates of astronomy according to Ptolemy and the elementary authors; 5. The sphericity of the Earth and the measurement of its circumference; 6. The general motion of the planets; 7. The circles of the sphere; 8. The length of the solar year; 9. The tables of the Sun; 10. Hipparchus's periods for the lunar motions; 11. The tables of the Moon; 12. The parallaxes of the Sun and Moon; 13. The prediction of eclipses; 14. The theory of the planets; 15. The catalogue of the fixed stars; Appendix; Errata.

  11. Precise optical dosimetry in low-level laser therapy of soft tissues in oral cavity

    NASA Astrophysics Data System (ADS)

    Stoykova, Elena V.; Sabotinov, O.

    2004-06-01

    Low level laser therapy (LLLT) is widely applied for treatment of diseases of the oral mucosa and parodont. Depending on the indication, different optical tips and light-guides are used to create beams with a required shape. However, to the best of our knowledge, the irradiation geometries are usually proposed assuming the validity of the Bouguer-Lambert law. This hardly corresponds to the real situation because of the dominating multiple scattering within the 600-1200 nm range, which destroys the correlation between the emitted laser beam and the spatial distribution of the absorbed dose inside the tissue. The aim of this work is to base the dosimetry of LLLT procedures for periodontal tissues on radiation transfer theory using a flexible Monte-Carlo code. We studied quantitatively the influence of tissue optical parameters (absorption and scattering coefficients, tissue refraction index, anisotropy factor) on the loss of correlation between the emitted beam and the energy deposition for converging or diverging beams. We evaluated energy deposition for the LLLT system we developed in a 3-D model of periodontal tissues created using a cross-sectional image of this region with internal structural information on the gingiva and the tooth. The laser source is a CW diode laser emitting an elliptical beam within 650-675 nm at an output power of 5-30 mW. To determine the geometry of the irradiating beam we used a Spiricon LBA 300 CCD camera.

  12. Numerical Analysis of Organ Doses Delivered During Computed Tomography Examinations Using Japanese Adult Phantoms with the WAZA-ARI Dosimetry System.

    PubMed

    Takahashi, Fumiaki; Sato, Kaoru; Endo, Akira; Ono, Koji; Ban, Nobuhiko; Hasegawa, Takayuki; Katsunuma, Yasushi; Yoshitake, Takayasu; Kai, Michiaki

    2015-08-01

    A dosimetry system for computed tomography (CT) examinations, named WAZA-ARI, is being developed to accurately assess radiation doses to patients in Japan. For dose calculations in WAZA-ARI, organ doses were numerically analyzed using average adult Japanese male (JM) and female (JF) phantoms with the Particle and Heavy Ion Transport code System (PHITS). Experimental studies clarified the energy distribution of emitted photons and dose profiles on the table for some multi-detector row CT (MDCT) devices. Numerical analyses using a source model in PHITS could specifically take into account emissions of x rays from the tube to the table, with attenuation of photons through a beam-shaping filter, for each MDCT device based on the experimental results. The source model was validated by measuring the CT dose index (CTDI). Numerical analyses with PHITS revealed a concordance of organ doses with body sizes of the JM and JF phantoms. The organ doses in the JM phantoms were compared with data obtained using previously developed systems. In addition, the dose calculations in WAZA-ARI were verified against previously reported results obtained with realistic NUBAS phantoms and against radiation dose measurements using a physical Japanese model (THRA1 phantom). The results imply that numerical analyses using the Japanese phantoms and specified source models can give reasonable dose estimates for MDCT devices for typical Japanese adults.

  13. Cancer Risk Assessment for Space Radiation

    NASA Technical Reports Server (NTRS)

    Richmond, Robert C.; Cruz, Angela; Bors, Karen; Curreri, Peter A. (Technical Monitor)

    2001-01-01

    Predicting the occurrence of human cancer following exposure to any agent causing genetic damage is a difficult task. This is because the uncertainty of uniform exposure to the damaging agent, and the uncertainty of uniform processing of that damage within a complex set of biological variables, degrade the confidence of predicting the delayed expression of cancer as a relatively rare event within any given clinically normal individual. The radiation health research priorities for enabling long-duration human exploration of space were established in the 1996 NRC Report entitled 'Radiation Hazards to Crews of Interplanetary Missions: Biological Issues and Research Strategies'. This report emphasized that a 15-fold uncertainty in predicting radiation-induced cancer incidence must be reduced before NASA can commit humans to extended interplanetary missions. That report concluded that the great majority of this uncertainty is biologically based, while a minority is physically based due to uncertainties in radiation dosimetry and radiation transport codes. Since that report, the biologically based uncertainty has remained large, and the relatively small uncertainty associated with radiation dosimetry has increased due to the considerations raised by concepts of microdosimetry. In a practical sense, however, the additional uncertainties introduced by microdosimetry are encouraging since they are in a direction of lowered effective dose absorbed through infrequent interactions of any given cell with the high energy particle component of space radiation. Additional information is contained in the original extended abstract.

  14. Radiological assessment. A textbook on environmental dose analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Till, J.E.; Meyer, H.R.

    1983-09-01

    Radiological assessment is the quantitative process of estimating the consequences to humans resulting from the release of radionuclides to the biosphere. It is a multidisciplinary subject requiring the expertise of a number of individuals in order to predict source terms, describe environmental transport, calculate internal and external dose, and extrapolate dose to health effects. Until now, no comprehensive book has been available that describes, at a uniform and comprehensive level, the techniques and models used in radiological assessment. Radiological Assessment is based on material presented at the 1980 Health Physics Society Summer School held in Seattle, Washington. The material has been expanded and edited to make it comprehensive in scope and useful as a text. Topics covered include (1) source terms for nuclear facilities and medical and industrial sites; (2) transport of radionuclides in the atmosphere; (3) transport of radionuclides in surface waters; (4) transport of radionuclides in groundwater; (5) terrestrial and aquatic food chain pathways; (6) reference man: a system for internal dose calculations; (7) internal dosimetry; (8) external dosimetry; (9) models for special-case radionuclides; (10) calculation of health effects in irradiated populations; (11) evaluation of uncertainties in environmental radiological assessment models; (12) regulatory standards for environmental releases of radionuclides; (13) development of computer codes for radiological assessment; and (14) assessment of accidental releases of radionuclides.

  15. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each machine is not required.

  16. Monte Carlo calculations of the cellular S-values for α-particle-emitting radionuclides incorporated into the nuclei of cancer cells of the MDA-MB231, MCF7 and PC3 lines.

    PubMed

    Rojas-Calderón, E L; Ávila, O; Ferro-Flores, G

    2018-05-01

    S-values (dose per unit of cumulated activity) for alpha particle-emitting radionuclides and monoenergetic alpha sources placed in the nuclei of three cancer cell models (MCF7, MDA-MB231 breast cancer cells and PC3 prostate cancer cells) were obtained by Monte Carlo simulation. The MCNPX code was used to calculate the fraction of energy deposited in the subcellular compartments due to the alpha sources in order to obtain the S-values. A comparison with internationally accepted S-values reported by the MIRD Cellular Committee for alpha sources in three sizes of spherical cells was also performed, leading to agreement within 4% when an extended alpha source uniformly distributed in the nucleus is simulated. This result made it possible to apply the Monte Carlo methodology to evaluate S-values for alpha particles in cancer cells. The calculation of S-values for the nucleus, cytoplasm and membrane of cancer cells, considering their particular geometry, distribution of the radionuclide source and chemical composition by means of Monte Carlo simulation, provides a good approach for dosimetry assessment of alpha emitters inside cancer cells. Results from this work provide information and tools that may help researchers in the selection of appropriate radiopharmaceuticals in alpha-targeted cancer therapy and improve its dosimetry evaluation. Copyright © 2018 Elsevier Ltd. All rights reserved.
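
    The MIRD relation underlying the abstract, mean dose D = cumulated activity × S with S = (energy per decay × absorbed fraction) / target mass, can be sketched numerically. The emitter energy, absorbed fraction and nucleus mass below are invented for illustration, not results from the paper.

    ```python
    # Sketch of a cellular S-value calculation (MIRD formalism).
    # All input numbers are hypothetical placeholders.

    MEV_TO_J = 1.602e-13  # joules per MeV

    def s_value(energy_mev_per_decay, absorbed_fraction, target_mass_kg):
        """S-value in Gy per Bq.s: mean dose to the target per decay."""
        return energy_mev_per_decay * MEV_TO_J * absorbed_fraction / target_mass_kg

    # Hypothetical alpha emitter: 5.8 MeV per decay, 80% of the energy
    # absorbed in a cell nucleus of mass 1e-12 kg.
    S = s_value(5.8, 0.8, 1e-12)   # Gy/(Bq.s)
    dose = 1.0e3 * S               # mean dose (Gy) for 1e3 Bq.s cumulated activity
    ```

    Monte Carlo codes such as MCNPX supply the absorbed fraction term; the rest of the calculation is the simple scaling shown here.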

  17. Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.

    2012-01-01

    Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32 cubed to 192 cubed, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.

  18. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980-present) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate accurate doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques, along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  19. Electron paramagnetic resonance (EPR) dosimetry using lithium formate in radiotherapy: comparison with thermoluminescence (TL) dosimetry using lithium fluoride rods.

    PubMed

    Vestad, Tor Arne; Malinen, Eirik; Olsen, Dag Rune; Hole, Eli Olaug; Sagstuen, Einar

    2004-10-21

    Solid-state radiation dosimetry by electron paramagnetic resonance (EPR) spectroscopy and thermoluminescence (TL) was utilized for the determination of absorbed doses in the range of 0.5-2.5 Gy. The dosimeter materials used were lithium formate and lithium fluoride (TLD-100 rods) for EPR dosimetry and TL dosimetry, respectively. 60Co gamma-rays and 4, 6, 10 and 15 MV x-rays were employed. The main objectives were to compare the variation in dosimeter reading of the respective dosimetry systems and to determine the photon energy dependence of the two dosimeter materials. The EPR dosimeter sensitivity was constant over the dose range in question, while the TL sensitivity increased by more than 5% from 0.5 to 2.5 Gy, thus displaying a supralinear dose response. The average relative standard deviation in the dosimeter reading per dose was 3.0% and 1.2% for the EPR and TL procedures, respectively. For EPR dosimeters, the relative standard deviation declined significantly from 4.3% to 1.1% over the dose range in question. The dose-to-water energy response for the megavoltage x-ray beams relative to 60Co gamma-rays was in the range of 0.990-0.979 and 0.984-0.962 for lithium formate and lithium fluoride, respectively. The results show that EPR dosimetry with lithium formate provides dose estimates with a precision comparable to that of TL dosimetry (using lithium fluoride) for doses above 2 Gy, and that lithium formate is slightly less dependent on megavoltage photon beam energy than lithium fluoride.

  20. Electron paramagnetic resonance (EPR) dosimetry using lithium formate in radiotherapy: comparison with thermoluminescence (TL) dosimetry using lithium fluoride rods

    NASA Astrophysics Data System (ADS)

    Vestad, Tor Arne; Malinen, Eirik; Rune Olsen, Dag; Olaug Hole, Eli; Sagstuen, Einar

    2004-10-01

    Solid-state radiation dosimetry by electron paramagnetic resonance (EPR) spectroscopy and thermoluminescence (TL) was utilized for the determination of absorbed doses in the range of 0.5-2.5 Gy. The dosimeter materials used were lithium formate and lithium fluoride (TLD-100 rods) for EPR dosimetry and TL dosimetry, respectively. 60Co γ-rays and 4, 6, 10 and 15 MV x-rays were employed. The main objectives were to compare the variation in dosimeter reading of the respective dosimetry systems and to determine the photon energy dependence of the two dosimeter materials. The EPR dosimeter sensitivity was constant over the dose range in question, while the TL sensitivity increased by more than 5% from 0.5 to 2.5 Gy, thus displaying a supralinear dose response. The average relative standard deviation in the dosimeter reading per dose was 3.0% and 1.2% for the EPR and TL procedures, respectively. For EPR dosimeters, the relative standard deviation declined significantly from 4.3% to 1.1% over the dose range in question. The dose-to-water energy response for the megavoltage x-ray beams relative to 60Co γ-rays was in the range of 0.990-0.979 and 0.984-0.962 for lithium formate and lithium fluoride, respectively. The results show that EPR dosimetry with lithium formate provides dose estimates with a precision comparable to that of TL dosimetry (using lithium fluoride) for doses above 2 Gy, and that lithium formate is slightly less dependent on megavoltage photon beam energy than lithium fluoride.

  1. Innovation and the future of advanced dosimetry: 2D to 5D

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2017-05-01

    Recent years have witnessed a remarkable evolution in the techniques, capabilities and applications of 3D dosimetry. Initially the goal was simple: to innovate new techniques capable of comprehensively measuring and verifying the exquisitely intricate dose distributions of a paradigm-changing emerging therapy, IMRT. Basic questions emerged: how well were treatment planning systems modelling the complex delivery, and how could treatments be verified for safe use on patients? Since that time, equally significant leaps of innovation have continued in the technology of treatment delivery. In addition, clinical practice has been transformed by the addition of on-board imaging capabilities, which have encouraged hypo-fractionation strategies and margin reduction. The net result is a high-stakes treatment setting where the clinical morbidity of any unintended treatment deviation is exacerbated by the combination of highly conformal dose distributions delivered with reduced margins under fractionation regimens unforgiving to healthy tissue. Not surprisingly, this scenario is replete with challenges and opportunities for new and improved dosimetry systems. In particular, tremendous interest exists in comprehensive 3D dosimetry systems, and in systems that can resolve the dose in moving structures (4D) and even in deforming structures (5D). Despite significant progress in the capability of multi-dimensional dosimetry systems, it is striking that true 3D dosimetry systems are today largely found in academic institutions or specialist clinics. The reasons will be explored. We will highlight innovations occurring both in treatment delivery and in the advanced dosimetry methods designed to verify them, and explore current and future opportunities for advanced dosimetry tools in clinical practice and translational research.

  2. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, Justus; Newton, Joseph; Yang Yun

    2012-07-15

    Purpose: To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T and O) applicator using Monte Carlo calculation and 3D dosimetry. Methods: For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. Monte Carlo code MCNP5 was used to simulate 1.5 × 10(9) photon histories from a (137)Cs source placed in the bucket to achieve statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for (137)Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE(®) dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. Optical CT scan time lasted 15 min during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)(3) isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of transmission rate through the bucket, which was applied to a clinical CT-based T and O implant plan. Results: The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. A systematic difference in bucket angle of 1°, 5°, and 10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with smaller effect to dose to Point A, bladder, sigmoid, and bowel. For 3D dosimetry, 90.6% of voxels had a 3D γ-index (criteria = 0.1 cm, 3% local signal) below 1.0 when comparing measured and expected dose about the unshielded source. Dose transmission through the gold shielding at a radial distance of 1 cm was 85.9% ± 0.2%, 83.4% ± 0.7%, and 82.5% ± 2.2% for Monte Carlo, and measurement for left and right buckets, respectively. Dose transmission was lowest at oblique angles from the bucket with a minimum of 56.7% ± 0.8%, 65.6% ± 1.7%, and 57.5% ± 1.6%, respectively. For a clinical T and O plan, attenuation from the buckets leads to a decrease in average Point A dose of ∼3.2% and decrease in D(2cc) to bladder, rectum, bowel, and sigmoid of 5%, 18%, 6%, and 12%, respectively. Conclusions: Differences between dummy and afterloading bucket position in the ovoids are minor compared to effects from asymmetric ovoid shielding, for which rectal dose is most affected. 3D dosimetry can fulfill a novel role in verifying Monte Carlo calculations of complex dose distributions as are common about brachytherapy sources and applicators.

  3. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry.

    PubMed

    Adamson, Justus; Newton, Joseph; Yang, Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-07-01

To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T&O) applicator using Monte Carlo calculation and 3D dosimetry. For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between the nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through the asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. The Monte Carlo code MCNP5 was used to simulate 1.5 × 10⁹ photon histories from a ¹³⁷Cs source placed in the bucket, achieving a statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for ¹³⁷Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. The optical CT scan lasted 15 min, during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)³ isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of the transmission rate through the bucket, which was applied to a clinical CT-based T&O implant plan. The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. 
A systematic difference in bucket angle of 1°, 5°, and 10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with a smaller effect on dose to Point A, bladder, sigmoid, and bowel. For 3D dosimetry, 90.6% of voxels had a 3D γ-index (criteria: 0.1 cm, 3% local signal) below 1.0 when comparing measured and expected dose about the unshielded source. Dose transmission through the gold shielding at a radial distance of 1 cm was 85.9% ± 0.2%, 83.4% ± 0.7%, and 82.5% ± 2.2% for Monte Carlo and for measurements of the left and right buckets, respectively. Dose transmission was lowest at oblique angles from the bucket, with minima of 56.7% ± 0.8%, 65.6% ± 1.7%, and 57.5% ± 1.6%, respectively. For a clinical T&O plan, attenuation from the buckets leads to a decrease in average Point A dose of ∼3.2% and decreases in D(2cc) to bladder, rectum, bowel, and sigmoid of 5%, 18%, 6%, and 12%, respectively. Differences between the dummy and afterloading bucket positions in the ovoids are minor compared to the effects of asymmetric ovoid shielding, for which rectal dose is most affected. 3D dosimetry can fulfill a novel role in verifying Monte Carlo calculations of complex dose distributions such as those about brachytherapy sources and applicators.
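The 3D γ-index pass rate quoted above (criteria 0.1 cm / 3% local signal) can be illustrated with a minimal sketch; the 1D profiles, brute-force search, and function name below are invented for illustration and are not the authors' implementation.

```python
import numpy as np

def gamma_index(ref, meas, x, dta=0.1, dd=0.03):
    """Brute-force local-dose gamma for 1D dose profiles.
    ref, meas: dose arrays on positions x (cm); dta in cm, dd fractional."""
    gam = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        # local (not global) dose normalization, matching the paper's criteria
        dose_term = (meas - di) / (dd * di)
        dist_term = (x - xi) / dta
        gam[i] = np.sqrt(dist_term**2 + dose_term**2).min()
    return gam

# toy radial falloff ~1/r^2, with a uniform 1% error as the "measurement"
x = np.linspace(1.0, 5.0, 81)
ref = 100.0 / x**2
meas = ref * 1.01
g = gamma_index(ref, meas, x)
pass_rate = (g < 1.0).mean()   # fraction of points with gamma below 1
```

A 1% local dose error against a 3% criterion keeps every gamma value well below 1, so all points pass.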

  4. STATUS REPORT: EVIDENCE BASED ADVANCES IN INHALATION DOSIMETRY FOR GASES WITH EFFECTS IN THE LOWER RESPIRATORY TRACT AND IN THE BODY

    EPA Science Inventory

    This report summarizes the status of specific inhalation dosimetry procedures for gases as outlined in U.S. EPA’s 1994 Methods for Derivation of Inhalation Reference Concentrations and Applications of Inhalation Dosimetry (U.S. EPA 1994) and reviews recent scientific advances in...

  5. Critical dosimetry measures and surrogate tools that can facilitate clinical success in PDT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pogue, Brian W.; Davis, Scott C.; Kanick, Stephen C.; Maytin, Edward V.; Pereira, Stephen P.; Palanisami, Akilan; Hasan, Tayyaba

    2016-03-01

Photodynamic therapy can be a highly complex treatment with more than one parameter to control, or in some cases it is easily implemented with little control other than prescribed drug and light values. The role of measured dosimetry in clinical adoption has not been as successful as it could have been, and part of this may stem from conflicting goals: advocating for as many measurements as possible for accurate control, versus companies and clinical adopters advocating for as few measurements as possible to keep it simple. An organized approach to dosimetry selection is required, which shifts from mechanistic measurements in pre-clinical and early phase I trials towards just the essential dose-limiting measurements, with a focus on possible surrogate measures in phase II/III trials. This essential-plus-surrogate approach to dosimetry should help clinical PDT achieve successful adoption. Examples of essential dosimetry points and surrogate dosimetry tools that might be implemented in phase II and higher trials are discussed for solid tissue PDT with verteporfin and skin lesion treatment with aminolevulinic acid.

  6. Monte Carlo simulations in radiotherapy dosimetry.

    PubMed

    Andreo, Pedro

    2018-06-27

The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in the last decades. Its widespread use in the field has converted this computer simulation technique into a common tool for reference and treatment planning dosimetry calculations. This work reviews the different MC calculations of dosimetric quantities, such as stopping-power ratios and perturbation correction factors required for reference ionization chamber dosimetry, as well as the fully realistic MC simulations currently available for clinical accelerators, detectors and patient treatment planning. Issues raised include the necessity for consistency in the data throughout the entire dosimetry chain in reference dosimetry, and how Bragg-Gray theory breaks down for small photon fields. Both aspects are less critical for MC treatment planning applications, but there are important constraints such as tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in the different types of treatment planning systems, they make the uncertainties involved in MC treatment planning still remain "uncertain".

  7. Chemical dosimetry system for criticality accidents.

    PubMed

    Miljanić, Saveta; Ilijas, Boris

    2004-01-01

The Ruđer Bošković Institute (RBI) criticality dosimetry system consists of a chemical dosimetry system for measuring the total (neutron + gamma) dose, and a thermoluminescent (TL) dosimetry system for a separate determination of the gamma ray component. The use of the chemical dosemeter solution chlorobenzene-ethanol-trimethylpentane (CET) is based on the radiolytic formation of hydrochloric acid, which protonates a pH indicator, thymolsulphonphthalein. The high molar absorptivity of its red form at 552 nm is responsible for the high sensitivity of the system: doses in the range 0.2-15 Gy can be measured. The dosemeter has been designed as a glass ampoule filled with the CET solution and inserted into a pen-shaped plastic holder. For dose determinations, a newly constructed optoelectronic reader has been used. The RBI team took part in the International Intercomparison of Criticality Accident Dosimetry Systems at the SILENE Reactor, Valduc, June 2002, with the CET dosimetry system. For gamma ray dose determination, TLD-700 TL detectors were used. The results obtained with the CET dosemeter show very good agreement with the reference values.
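The absorbance-to-dose step of such a chemical system amounts to a linear calibration against reference irradiations; the absorbance/dose pairs below are hypothetical, constructed to be exactly linear, and are not the CET system's real response data.

```python
import numpy as np

# Hypothetical calibration pairs (net absorbance at 552 nm, delivered dose in Gy);
# real CET response data would come from reference irradiations.
absorbance = np.array([0.02, 0.10, 0.30, 0.60, 1.20])
dose_gy    = np.array([0.2,  1.0,  3.0,  6.0, 12.0])

# Least-squares line through the calibration points: dose = a * A + b
a, b = np.polyfit(absorbance, dose_gy, 1)

def dose_from_absorbance(A):
    """Convert a measured net absorbance to dose; valid only inside the
    calibrated range (about 0.2-15 Gy for the system described)."""
    return a * A + b

reading = dose_from_absorbance(0.45)
```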

  8. Turbulence modeling for Francis turbine water passages simulation

    NASA Astrophysics Data System (ADS)

    Maruzewski, P.; Hayashi, H.; Munch, C.; Yamaishi, K.; Hashii, T.; Mombelli, H. P.; Sugow, Y.; Avellan, F.

    2010-08-01

Applications of Computational Fluid Dynamics (CFD) to hydraulic machines require the ability to handle turbulent flows and to account for the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a practical candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation (LES), though in the same family as DNS, could be an alternative, whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in regions where the boundary layers are attached to solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Any successful structured or unstructured CFD code therefore has to offer a wide range of models, from classic RANS models to complex hybrid models. The aim of this study is to compare the behavior of turbulent simulations on structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. The study outlines the discrepancies encountered in predicting the wake of the turbine blades when using either the standard k-epsilon model or the SST shear stress transport model in a steady CFD simulation. Finally, comparisons are made with experimental data from reduced-scale model measurements at the EPFL Laboratory for Hydraulic Machines.
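The two-equation RANS models compared above close the averaged equations with an eddy viscosity. As a reminder (standard textbook forms, not taken from this record), the k-epsilon model and the SST model compute it as:

```latex
\nu_t^{\,k\text{-}\varepsilon} = C_\mu \frac{k^2}{\varepsilon}, \qquad C_\mu = 0.09,
\qquad\qquad
\nu_t^{\,\mathrm{SST}} = \frac{a_1 k}{\max\!\left(a_1 \omega,\; S F_2\right)}
```

where $k$ is the turbulent kinetic energy, $\varepsilon$ and $\omega$ its dissipation and specific dissipation rates, $S$ the strain-rate magnitude, and $F_2$ the SST blending function; the SST limiter is what improves wake and adverse-pressure-gradient predictions relative to standard k-epsilon.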

  9. Transition between free, mixed and forced convection

    NASA Astrophysics Data System (ADS)

    Jaeger, W.; Trimborn, F.; Niemann, M.; Saini, V.; Hering, W.; Stieglitz, R.; Pritz, B.; Fröhlich, J.; Gabi, M.

    2017-07-01

In this contribution, numerical methods are discussed for predicting the heat transfer to liquid metal flowing in rectangular flow channels. A correct representation of the thermo-hydraulic behaviour is necessary because these numerical methods are used for design and safety studies of components with rectangular channels; hence it must be proven that the simulation results adequately represent the real conditions. Up to now, the majority of simulations have concerned forced convection of liquid metals flowing in circular pipes or rod bundles, because these geometries represent most components in process engineering (e.g. piping, heat exchangers). Among the open questions related to liquid metal heat transfer is the behaviour during the transition between heat transfer regimes. This contribution therefore aims to provide useful information on the transition from forced to mixed and free convection, with a focus on a rectangular flow channel. The thermo-hydraulic behaviour under transitional heat transfer regimes is assessed by means of system code simulations, RANS CFD simulations, LES and DNS, and experimental investigations, and each set of results is compared with the others. The comparison of external experimental data, DNS data, RANS data and system code simulation results shows that the global heat transfer for forced convection in rectangular flow channels can be consistently represented by these means. Furthermore, the LES data agree with the RANS CFD results for different Richardson numbers with respect to the temperature and velocity distributions. The agreement of the simulation results with each other, together with validation against experimental data, fosters confidence in the predictive capabilities of these numerical methods for engineering applications.
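The heat transfer regimes discussed above are conventionally delimited by the Richardson number; as a reminder (standard definition, not taken from this record):

```latex
\mathrm{Ri} \;=\; \frac{\mathrm{Gr}}{\mathrm{Re}^2} \;=\; \frac{g\,\beta\,\Delta T\,L}{u^2},
\qquad
\mathrm{Gr} = \frac{g\,\beta\,\Delta T\,L^3}{\nu^2}, \quad \mathrm{Re} = \frac{u L}{\nu}
```

with $\mathrm{Ri} \ll 1$ indicating forced convection, $\mathrm{Ri} \sim 1$ mixed convection, and $\mathrm{Ri} \gg 1$ free convection; here $g$ is gravity, $\beta$ the thermal expansion coefficient, $\Delta T$ a characteristic temperature difference, $L$ a characteristic length, $u$ a characteristic velocity, and $\nu$ the kinematic viscosity.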

  10. Etude spectroscopique des collisions moleculaires (hydrogene-azote et hydrogene-oxygene) a des energies de quelques MeV

    NASA Astrophysics Data System (ADS)

    Plante, Jacinthe

    1998-09-01

The results presented here come from a systematic study of constant-velocity collisions between hydrogen projectiles (H+, H2+ and H3+ at 1 MeV/nucleon) and two gaseous targets (N2 and O2) at various pressures. The collisions are analysed by means of emission spectra (from 400 Å to 6650 Å) and intensity-versus-pressure plots. The spectra revealed the presence of lines of atomic nitrogen, molecular nitrogen, atomic oxygen, molecular oxygen and atomic hydrogen. The hydrogen lines are observed only with the H2+ and H3+ projectiles; the processes responsible for the formation of these lines are therefore projectile fragmentation mechanisms. In conclusion, there is a notable difference between the projectiles and between the different pressures: the nitrogen and oxygen lines increase with pressure, while the atomic hydrogen lines show a nonlinear relation with pressure.

  11. Internal dosimetry technical basis manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-12-20

The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs.

  12. Radiation-induced damage analysed by luminescence methods in retrospective dosimetry and emergency response.

    PubMed

    Woda, Clemens; Bassinet, Céline; Trompier, François; Bortolin, Emanuela; Della Monaca, Sara; Fattibene, Paola

    2009-01-01

The increasing risk of a mass casualty scenario following a large-scale radiological accident or attack necessitates the development of appropriate dosimetric tools for emergency response. Luminescence dosimetry has been reliably applied for dose reconstruction in contaminated settlements for several decades, and recent research into new materials carried close to the human body opens the possibility of estimating individual doses for accident and emergency dosimetry using the same technique. This paper reviews the luminescence research into materials useful for accident dosimetry and applications in retrospective dosimetry. The properties of the materials are critically discussed with regard to the requirements for population triage. It is concluded that electronic components found within portable electronic devices, such as mobile phones, are at present the most promising material to function as a fortuitous dosimeter in an emergency response.

  13. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but had not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm in length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm, and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.
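Benchmarks of this kind reduce to per-bin percent deviations between codes; below is a minimal sketch with toy radial profiles (invented, not the published data) standing in for the tallied doses from two codes.

```python
import numpy as np

# Toy stand-ins for radial dose profiles from two codes (e.g. PENELOPE vs
# MCNP); a real benchmark would load tallied dose per radial bin instead.
r = np.linspace(1.0, 10.0, 10)                 # radial distance, cm
dose_code_a = (1.0 / r**2) * np.exp(-0.1 * r)  # inverse-square times attenuation
dose_code_b = dose_code_a * 1.02               # constant +2% offset, for illustration

# Percent deviation of code B relative to code A, per radial bin
pct_dev = 100.0 * (dose_code_b - dose_code_a) / dose_code_a
max_dev = np.abs(pct_dev).max()
```

With a uniform 2% offset, every bin deviates by exactly 2%, which is how "agreement within ±2%" style statements above are quantified.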

  14. La fibroscopie digestive haute chez 2795 patients au centre hospitalier universitaire-campus de Lomé: les particularités selon le sexe

    PubMed Central

    Lawson-Ananissoh, Laté Mawuli; Bouglouga, Oumboma; Bagny, Aklesso; Kaaga, Laconi; Redah, Datouda

    2014-01-01

Introduction: Our study reports the indications for, and the lesions identified at, upper gastrointestinal (GI) endoscopy, and notes the particularities by sex. Methods: Retrospective, descriptive study of upper GI endoscopy reports from the digestive endoscopy unit of the hepato-gastroenterology department of the CHU Campus de Lomé, from 15 May 2009 to 31 December 2013. Results: Upper GI endoscopy was performed in 2795 patients, 1188 men and 1607 women. The mean age was 40.65 years (range: 5 to 93 years). The examination was normal more often in women than in men, a statistically significant difference (p = 0.000). The main indications were epigastric pain in women (p = 0.000), and upper GI bleeding (p = 0.000) and portal hypertension (p = 0.000) in men; 3485 pathological lesions were observed. Inflammatory pathology predominated (56.3%), followed by ulcerative pathology (13.89%) and tumoral pathology (2.01%). Oesophageal varices and candidiasis were significantly more frequent in men. Gastric ulcerations (p = 0.000) and duodeno-gastric bile reflux (p = 0.017) were found more often in women, and hypertensive gastropathy much more often in men (p = 0.000). Duodenal lesions, whether inflammatory or ulcerative and whether or not associated with bulbar stenosis, were more frequent in men. Conclusion: Overall, inflammatory lesions predominated in women, and tumoral and ulcerative lesions in men. PMID:25852805

  15. TH-A-204-00: Key Dosimetry Data - Impact of New ICRU Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

The ICRU is currently finalizing a report on key data for radiation dosimetry. This multi-year review has resulted in a number of recommendations regarding “fundamental” data that are used in dosimetry related to radiation therapy. This educational session will explain the background for the ICRU committee’s work, the content and conclusions of the report and the impact on outputs, including NIST primary standards, ADCL calibration coefficients and clinical reference dosimetry. Parameters and beam modalities potentially affected by this report include: the mean excitation energy, I, for graphite, air, and water; the average energy required to create an ion pair in dry air (commonly referred to as W/e); the uncertainty in the determination of air kerma in kV x-rays; the absolute value of Co-60 and Cs-137 primary standards and the dissemination of calibration coefficients; the determination of air kerma strength for Ir-192 HDR brachytherapy sources; ion chamber kQ factors for linac MV beams; and ion chamber kQ factors for proton beams. The changes in reference dosimetry that would result from adoption of the ICRU recommendations are of the order of 0.5% to 1%, an effect that will not impact clinical dose delivery but will be detectable in the clinical setting. This session will also outline how worldwide metrology is coordinated through the Convention of the Meter, and therefore how the international dosimetry community will proceed with adopting these recommendations so that country-to-country uniformity in reference dosimetry is maintained. Timelines and communications methods will also be discussed to ensure that users, such as clinical medical physicists, are not surprised when their chamber’s calibration coefficient apparently changes. Learning Objectives: Understand the background for the ICRU committee’s work on key dosimetry data. Understand the proposed changes to key data and the impacts on reference dosimetry. 
Understand the methodology and timeline for adoption of the ICRU recommendations.

  16. TH-A-204-01: Part I - Key Data for Ionizing-Radiation Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seltzer, S.

The ICRU is currently finalizing a report on key data for radiation dosimetry. This multi-year review has resulted in a number of recommendations regarding “fundamental” data that are used in dosimetry related to radiation therapy. This educational session will explain the background for the ICRU committee’s work, the content and conclusions of the report and the impact on outputs, including NIST primary standards, ADCL calibration coefficients and clinical reference dosimetry. Parameters and beam modalities potentially affected by this report include: the mean excitation energy, I, for graphite, air, and water; the average energy required to create an ion pair in dry air (commonly referred to as W/e); the uncertainty in the determination of air kerma in kV x-rays; the absolute value of Co-60 and Cs-137 primary standards and the dissemination of calibration coefficients; the determination of air kerma strength for Ir-192 HDR brachytherapy sources; ion chamber kQ factors for linac MV beams; and ion chamber kQ factors for proton beams. The changes in reference dosimetry that would result from adoption of the ICRU recommendations are of the order of 0.5% to 1%, an effect that will not impact clinical dose delivery but will be detectable in the clinical setting. This session will also outline how worldwide metrology is coordinated through the Convention of the Meter, and therefore how the international dosimetry community will proceed with adopting these recommendations so that country-to-country uniformity in reference dosimetry is maintained. Timelines and communications methods will also be discussed to ensure that users, such as clinical medical physicists, are not surprised when their chamber’s calibration coefficient apparently changes. Learning Objectives: Understand the background for the ICRU committee’s work on key dosimetry data. Understand the proposed changes to key data and the impacts on reference dosimetry. 
Understand the methodology and timeline for adoption of the ICRU recommendations.

  17. The Mayak Worker Dosimetry System (Mwds-2013): An Introduction to The Documentation

    DOE PAGES

    Napier, B. A.

    2017-03-17

    The reconstruction of radiation doses to Mayak Production Association workers in central Russia supports radiation epidemiological studies for the U.S.-Russian Joint Coordinating Committee on Radiation Effects Research. The most recent version of the dosimetry was performed with the Mayak Worker Dosimetry System-2013. Here, this introduction outlines the logic and general content of the series of articles presented in this issue of Radiation Protection Dosimetry. The articles summarize the models, describe the basis for most of the key decisions made in developing the models and present an overview of the results.

  18. Dosimetry analyses of the Ringhals 3 and 4 reactor pressure vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesza, J.A.; Fero, A.H.; Rouden, J.

    2011-07-01

A comprehensive series of neutron dosimetry measurements consisting of surveillance capsules, reactor pressure vessel cladding samples, and ex-vessel neutron dosimetry has been analyzed and compared to the results of three-dimensional, cycle-specific neutron transport calculations for the Ringhals Unit 3 and Unit 4 reactors in Sweden. The comparisons show excellent agreement between calculations and measurements. The measurements also demonstrate that it is possible to perform retrospective dosimetry measurements using the 93Nb(n,n')93mNb reaction on samples of 18-8 austenitic stainless steel with only trace amounts of elemental niobium. (authors)

  19. The Mayak Worker Dosimetry System (Mwds-2013): An Introduction to The Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B. A.

    The reconstruction of radiation doses to Mayak Production Association workers in central Russia supports radiation epidemiological studies for the U.S.-Russian Joint Coordinating Committee on Radiation Effects Research. The most recent version of the dosimetry was performed with the Mayak Worker Dosimetry System-2013. Here, this introduction outlines the logic and general content of the series of articles presented in this issue of Radiation Protection Dosimetry. The articles summarize the models, describe the basis for most of the key decisions made in developing the models and present an overview of the results.

  20. ESR dosimetry for atomic bomb survivors and radiologic technologists

    NASA Astrophysics Data System (ADS)

    Tatsumi-Miyajima, Junko

    1987-06-01

Individual absorbed doses for atomic bomb (A-bomb) survivors and radiologic technologists have been estimated using a new personal dosimetry method. This dosimetry is based on electron spin resonance (ESR) spectroscopy of the CO₃³⁻ radicals that are produced in teeth by radiation. Measurements were carried out to study the characteristics of the dosimetry; the ESR signals of the CO₃³⁻ radicals were stable and increased linearly with the radiation dose. In the evaluation of the absorbed dose, the ESR signals were treated as a function of photon energy. The absorbed doses in ten cases of A-bomb victims and eight cases of radiologic technologists were determined. For A-bomb survivors, the absorbed doses estimated using ESR dosimetry were consistent with those obtained from calculations of the A-bomb tissue dose in air, and also with those obtained from chromosome measurements. For radiologic technologists, the absorbed doses estimated using ESR dosimetry agreed with those calculated from information on occupational history and conditions. The advantage of this method is that the absorbed dose can be estimated directly by measuring the ESR signals obtained from the teeth of persons exposed to radiation. The ESR dosimetry is therefore useful for estimating accidental exposures and long-term cumulative doses.
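One standard ESR dose reconstruction technique, the additive-dose method, exploits exactly the linearity of signal with dose noted above; this is a generic sketch with invented numbers, not necessarily the exact procedure of this study.

```python
import numpy as np

# Additive-dose method: the sample is re-irradiated with known added doses,
# the signal-vs-added-dose line is fitted, and extrapolation back to zero
# signal gives the original (pre-existing) absorbed dose as the negative
# x-intercept. All values below are invented and exactly linear.
added_dose = np.array([0.0, 1.0, 2.0, 4.0])   # Gy added in the laboratory
esr_signal = np.array([3.0, 4.0, 5.0, 7.0])   # arbitrary units

slope, intercept = np.polyfit(added_dose, esr_signal, 1)
initial_dose = intercept / slope   # Gy absorbed before the lab irradiations
```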

  1. Commerce de detail de l'essence automobile: Modelisation de l'impact a court terme des facteurs endogenes et exogenes sur les ventes d'essence dans les stations-service a Montreal

    NASA Astrophysics Data System (ADS)

    Nguimbus, Raphael

The heart of this thesis is the determination of the impact of controllable and uncontrollable factors that influence the sales volumes of retail outlets selling homogeneous, highly substitutable products. The aim is to estimate a set of stable, asymptotically efficient coefficients uncorrelated with the random site-specific effects of gasoline stations in the Montreal market (Quebec, Canada) during the period 1993-1997. The econometric model specified and tested isolates a set of four variables: the retail price posted at a site for regular gasoline, the service capacity of the site during peak hours, the hours of service, and the number of competing sites within a two-kilometre radius of the site. These four factors influence gasoline sales at service stations. Panel data and robust estimation methods (minimum distance estimator) are used to estimate the parameters of the sales model. We start from the general hypothesis that each site develops an attraction that draws motorist customers and allows it to make sales. This attraction capacity varies from one site to another, owing to the combination of the marketing effort and the competitive environment around the site. The notions of neighbourhood and spatial competition explain the behaviour of the decision-makers who manage the sites. The goal of this thesis is to develop a decision-support tool (analytical model) that allows managers of service station chains to allocate commercial resources efficiently across points of sale.
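The central statistical difficulty above, estimating coefficients in the presence of unobserved site-specific effects, can be sketched with a simple within (fixed-effects) estimator on synthetic panel data; the thesis itself uses a minimum-distance estimator, and all variable names and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_periods = 50, 8

# Four hypothetical regressors per site-period: price, peak-hour service
# capacity, hours of service, number of nearby competitors.
beta_true = np.array([-2.0, 0.5, 0.3, -0.4])
X = rng.normal(size=(n_sites, n_periods, 4))
site_effect = rng.normal(size=(n_sites, 1))      # unobserved site heterogeneity
y = X @ beta_true + site_effect + 0.01 * rng.normal(size=(n_sites, n_periods))

# Within transformation: demeaning each site's series sweeps out the
# site effects, leaving the slope coefficients identified by OLS.
Xw = X - X.mean(axis=1, keepdims=True)
yw = y - y.mean(axis=1, keepdims=True)
beta_hat, *_ = np.linalg.lstsq(Xw.reshape(-1, 4), yw.reshape(-1), rcond=None)
```

The recovered `beta_hat` is close to `beta_true` because demeaning removes the site effects entirely in this toy setup.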

  2. TREE Preferred Procedures, Selected Electronic Parts.

    DTIC Science & Technology

    1982-01-31

presented. Chapter 5 covers dosimetry and environmental correlation procedures. Neutron measurements, photon and electron measurements, and pulse...complications from nonuniformity of dose and to provide accurate dosimetry, exposures should be performed under conditions of electron equilibrium. Unless...nonconducting dosimetry materials or test articles are exposed to intense electron beams characteristic of flash X-ray machines, the effect of the potential

  3. Verification of an on line in vivo semiconductor dosimetry system for TBI with two TLD procedures.

    PubMed

    Sánchez-Doblado, F; Terrón, J A; Sánchez-Nieto, B; Arráns, R; Errazquin, L; Biggs, D; Lee, C; Núñez, L; Delgado, A; Muñiz, J L

    1995-01-01

This work presents the verification of an on-line in vivo dosimetry system based on semiconductors. Software and hardware have been designed to convert the diode signal into absorbed dose. Final verification was made in the form of an intercomparison with two independent thermoluminescent dosimetry (TLD) systems under TBI conditions.
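Schematically, an on-line diode system converts the raw signal to dose with a calibration factor and condition-dependent correction factors; the factor names and values below are illustrative assumptions, not those of the cited system.

```python
# Schematic on-line diode in vivo dose calculation: raw reading times a
# calibration factor, times correction factors for the actual irradiation
# conditions. All names and values here are hypothetical.
def diode_dose(reading_nc, cal_factor_gy_per_nc, corrections):
    dose = reading_nc * cal_factor_gy_per_nc
    for c in corrections.values():   # e.g. field size, SSD, temperature
        dose *= c
    return dose

corr = {"field_size": 1.02, "ssd": 0.99, "temperature": 1.005}
dose = diode_dose(150.0, 0.0100, corr)   # Gy
```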

  4. Develop real-time dosimetry concepts and instrumentation for long term missions

    NASA Technical Reports Server (NTRS)

    Braby, L. A.

    1981-01-01

The development of a rugged portable dosimetry system, based on microdosimetry techniques, which will measure dose and evaluate dose equivalent in a mixed radiation field, is described. Progress on the desired dosimetry system can be divided into three distinct areas; results on the development of the radiation detector and the electronic system are presented, and the required mathematical techniques are investigated.

  5. Methods and Models of the Hanford Internal Dosimetry Program, PNNL-MA-860

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, Eugene H.; Bihl, Donald E.; Maclellan, Jay A.

    2003-01-03

    This manual describes the technical basis for the design of the routine radiobioassay monitoring program and assessments of internal dose. Its purpose is to provide a historical record of the methods, models, and assumptions used for internal dosimetry at Hanford, and serve as a technical reference for radiation protection and dosimetry staff.

  6. Prise en compte d'un couplage fin neutronique-thermique dans les calculs d'assemblage pour les reacteurs a eau pressurisee

    NASA Astrophysics Data System (ADS)

    Greiner, Nathan

    Core simulation for Pressurized Water Reactors (PWRs) is carried out by a set of computer codes which, under certain assumptions, approximate the physical quantities of interest, such as the effective multiplication factor or the power and temperature distributions. The neutronics calculation scheme relies on three main steps: the production of an isotopic cross-section library; the production of a reactor database through the lattice calculation; and the full-core calculation. In the lattice calculation, in which Boltzmann's transport equation is solved over an assembly geometry, the temperature distribution is taken as uniform and constant during irradiation. This represents a set of approximations since, on the one hand, the temperature distribution in the assembly is not uniform (strong temperature gradients in the fuel pins, discrepancies between fuel pins) and, on the other hand, irradiation causes the thermal properties of the pins to change, which modifies the temperature distribution. Our work aims at implementing a neutronics-thermomechanics coupling in the lattice calculation to finely discretize the temperature distribution and to study its effects. To perform the study, the CEA (Commissariat a l'Energie Atomique et aux Energies Alternatives) lattice code APOLLO2 was used for the neutronics and the EDF (Electricite De France) code C3THER for the thermal calculations. We show very small effects of the pin-scale coupling when comparing the use of a temperature profile with the use of a uniform temperature for UOX-type and MOX-type fuels. We next investigate the thermal feedback using an assembly-scale coupling that takes into account the presence of large water gaps in a UOX-type assembly at burnup 0, and show a very small impact on the calculation of the hot-spot factor.
Finally, the coupling is introduced into the isotopic depletion calculation, and we show that the deviations in reactivity and isotopic number densities remain small, albeit not negligible, for UOX-type and MOX-type assemblies. The specific behavior of gadolinium-bearing fuel pins in a UO2-Gd2O3-type assembly is highlighted.
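The coupling described in this record is, in essence, a fixed-point (Picard) iteration between a neutronics solve and a thermal solve. A minimal sketch of that loop, with toy stand-in solvers (the actual study used APOLLO2 and C3THER; the solver functions, feedback coefficients, and tolerance below are invented for illustration):

```python
def coupled_lattice_iteration(power_solver, temp_solver, t_init, tol=0.1, max_iter=50):
    """Fixed-point iteration: neutronics yields pin powers for given
    temperatures, the thermal solve yields temperatures for those powers;
    repeat until the temperature field stops changing."""
    temps = t_init
    for _ in range(max_iter):
        powers = power_solver(temps)      # neutronics step
        new_temps = temp_solver(powers)   # thermal step
        if max(abs(a - b) for a, b in zip(new_temps, temps)) < tol:
            return new_temps, powers
        temps = new_temps
    raise RuntimeError("coupling did not converge")

# Toy stand-ins: power falls slightly as fuel heats up (Doppler-like
# feedback), temperature rises with power. Numbers are illustrative only.
power_solver = lambda T: [100.0 - 0.01 * (t - 900.0) for t in T]
temp_solver = lambda P: [600.0 + 3.0 * p for p in P]
temps, powers = coupled_lattice_iteration(power_solver, temp_solver, [900.0, 900.0])
```

With these toy coefficients the iteration contracts to a fixed point near 900 K and 100 power units per pin; a real lattice coupling iterates the same way, but each "solver" is a full transport or thermomechanics calculation.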

  7. Calibration of a mosfet detection system for 6-MV in vivo dosimetry.

    PubMed

    Scalchi, P; Francescon, P

    1998-03-01

    Metal oxide semiconductor field-effect transistor (MOSFET) detectors were calibrated to perform in vivo dosimetry during 6-MV treatments, both in normal setup and total body irradiation (TBI) conditions. MOSFET water-equivalent depth, dependence of the calibration factors (CFs) on field size, MOSFET orientation, bias supply, accumulated dose, incidence angle, temperature, and spoiler-skin distance in the TBI setup were investigated. MOSFET reproducibility was verified. The correlation between the water-equivalent midplane depth and the ratio of the exit MOSFET readout to the entrance MOSFET readout was studied. MOSFET midplane dosimetry in the TBI setup was compared with thermoluminescent dosimetry in an anthropomorphic phantom. Using ionization chamber measurements, the TBI midplane dosimetry was also verified in the presence of cork as a lung substitute. The water-equivalent depth of the MOSFET is about 0.8 mm or 1.8 mm, depending on which sensor side faces the beam. The field size also affects this quantity; Monte Carlo simulations attribute this behavior to changes in the mean energy of the contaminating electrons. The CFs vary linearly with the side length of the square field, for fields ranging from 5 x 5 to 30 x 30 cm2. In the TBI setup, varying the spoiler-skin distance between 5 mm and 10 cm affects the CFs within 5%. MOSFET reproducibility is about 3% (2 SD) for the doses normally delivered to patients. The effect of accumulated dose on the sensor response is negligible. For beam incidence ranging from 0 degrees to 90 degrees, the MOSFET response varies within 7%. No monotonic correlation between the sensor response and temperature is apparent. Good correlation between the water-equivalent midplane depth and the exit-to-entrance MOSFET readout ratio was found (the correlation coefficient is about 1).
The MOSFET midplane dosimetry relevant to the anthropomorphic phantom irradiation is in agreement with TLD dosimetry within 5%. Ionization chamber and MOSFET midplane dosimetry in inhomogeneous phantoms are in agreement within 2%. MOSFET characteristics are suitable for the in vivo dosimetry relevant to 6-MV treatments, both in normal and TBI setup. The TBI midplane dosimetry using MOSFETs is valid also in the presence of the lung, which is the most critical organ, and allows verifying that calculation of the lung attenuator thicknesses based only on the density is not correct. Our MOSFET dosimetry system can be used also to determine the surface dose by using the water-equivalent depth and extrapolation methods. This procedure depends on the field size used.
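The exit/entrance readout ratio tracks midplane depth because, for a roughly exponentially attenuated beam, the ratio depends on the total water-equivalent thickness traversed. A sketch of that inversion under a pure exponential-attenuation model (a simplification of the empirical correlation reported here; the attenuation coefficient is an invented, plausible value for a 6-MV beam):

```python
import math

def midplane_depth_from_ratio(exit_over_entrance, mu_per_cm):
    """Invert R = exp(-mu * d_total) for the total water-equivalent
    thickness, then take half of it as the midplane depth."""
    d_total = -math.log(exit_over_entrance) / mu_per_cm
    return d_total / 2.0

# Invented effective attenuation coefficient (~0.05 /cm) for illustration:
depth = midplane_depth_from_ratio(0.35, 0.05)  # ~10.5 cm midplane depth
```

In practice the paper establishes the depth-ratio correlation empirically rather than assuming a single exponential, but the monotonic relationship exploited is the same.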

  8. On the use of unshielded cables in ionization chamber dosimetry for total-skin electron therapy.

    PubMed

    Chen, Z; Agostinelli, A; Nath, R

    1998-03-01

    The dosimetry of total-skin electron therapy (TSET) usually requires ionization chamber measurements in a large electron beam (up to 120 cm x 200 cm). Exposing the chamber's electric cable, its connector and part of the extension cable to the large electron beam will introduce unwanted electronic signals that may lead to inaccurate dosimetry results. While the best strategy to minimize the cable-induced electronic signal is to shield the cable and its connector from the primary electrons, as has been recommended by the AAPM Task Group Report 23 on TSET, cables without additional shielding are often used in TSET dosimetry measurements for logistical reasons, for example when an automatic scanning dosimetry system is used. This paper systematically investigates the consequences and the acceptability of using an unshielded cable in ionization chamber dosimetry in a large TSET electron beam. In this paper, we separate cable-induced signals into two types. The type-I signal includes all induced charges that do not change sign upon switching the chamber polarity, and type II includes all those that do. The type-I signal is easily cancelled by the polarity averaging method. The type-II cable-induced signal is independent of the depth of the chamber in a phantom, and its magnitude relative to the true signal determines the acceptability of a cable for use under unshielded conditions. Three different cables were evaluated in two different TSET beams in this investigation. For dosimetry near the depth of maximum buildup, the cable-induced dosimetry error was found to be less than 0.2% when the two-polarity averaging technique was applied. At greater depths, the relative dosimetry error was found to increase at a rate approximately equal to the inverse of the electron depth dose.
Since the application of the two-polarity averaging technique requires a constant-irradiation condition, it was demonstrated that an additional error of up to 4% could be introduced if the unshielded cable's spatial configuration were altered during the two-polarity measurements. This suggests that automatic scanning systems with unshielded cables should not be used in TSET ionization chamber dosimetry. However, the data did show that an unshielded cable may be used in TSET ionization chamber dosimetry if the size of the cable-induced error in a given TSET beam is pre-evaluated and the measurement is carefully conducted. When such an evaluation has not been performed, additional shielding should be applied to the cable being used, although this makes measurements at multiple points difficult.
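The two-polarity averaging referred to above cancels the type-I component because that component keeps its sign when the chamber polarity is switched, while the true ionization signal (and the type-II component) reverses. A minimal sketch of the arithmetic; the numeric split between true signal and cable components is illustrative, not taken from the paper:

```python
def polarity_averaged_signal(m_plus, m_minus):
    """Half-difference of the readings at the two polarities.

    With M+ = S + c1 + c2 and M- = -S + c1 - c2 (S: true signal;
    c1: type-I cable signal, unchanged by the polarity switch;
    c2: type-II cable signal, which flips with polarity),
    (M+ - M-)/2 = S + c2: c1 is cancelled, c2 survives.
    """
    return (m_plus - m_minus) / 2.0

# Illustrative numbers: true signal 10 nC, type-I cable signal 0.5 nC,
# type-II cable signal 0.02 nC (the residual error source).
S, c1, c2 = 10.0, 0.5, 0.02
m_plus = S + c1 + c2
m_minus = -S + c1 - c2
print(polarity_averaged_signal(m_plus, m_minus))  # 10.02: c1 gone, c2 remains
```

This is why the paper's acceptability criterion rests on the type-II magnitude alone: it is the only cable contribution the averaging cannot remove.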

  9. Wind turbine wake interactions at field scale: An LES study of the SWiFT facility

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolei; Boomsma, Aaron; Barone, Matthew; Sotiropoulos, Fotis

    2014-06-01

    The University of Minnesota Virtual Wind Simulator (VWiS) code is employed to simulate turbine/atmosphere interactions in the Scaled Wind Farm Technology (SWiFT) facility developed by Sandia National Laboratories in Lubbock, TX, USA. The facility presently consists of three turbines, and the simulations consider the case of wind blowing from the south, such that two turbines are in the free stream and the third turbine is in the direct wake of one upstream turbine, with a separation of 5 rotor diameters. Large-eddy simulation (LES) on two successively finer grids is carried out to examine the sensitivity of the computed solutions to grid refinement. It is found that the details of the break-up of the tip vortices into small-scale turbulence structures can only be resolved on the finer grid. It is also shown that the power coefficient CP of the downwind turbine predicted on the coarse grid is somewhat higher than that obtained on the fine mesh. On the other hand, the rms (root-mean-square) of the CP fluctuations is nearly the same on both grids, although more small-scale turbulence structures are resolved upwind of the downwind turbine on the finer grid.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goke, Sarah Hayes; Elliott, Nathan Ryan

    The Sandia National Laboratories’ Internal Dosimetry Technical Basis Manual is intended to provide extended technical discussion and justification of the internal dosimetry program at SNL. It serves to record the approach to evaluating internal doses from radiobioassay data, and where appropriate, from workplace monitoring data per the Department of Energy Internal Dosimetry Program Guide DOE G 441.1C. The discussion contained herein is directed primarily to current and future SNL internal dosimetrists. In an effort to conserve space in the TBM and avoid duplication, it contains numerous references providing an entry point into the internal dosimetry literature relevant to this program. The TBM is not intended to act as a policy or procedure statement, but will supplement the information normally found in procedures or policy documents. The internal dosimetry program outlined in this manual is intended to meet the requirements of Federal Rule 10 CFR 835 for monitoring the workplace and for assessing internal radiation doses to workers.

  11. Role of the lower esophageal sphincter on esophageal acid exposure - a review of over 2000 patients.

    PubMed

    Tsuboi, Kazuto; Hoshino, Masato; Sundaram, Abhishek; Yano, Fumiaki; Mittal, Sumeet K

    2012-01-01

    Three lower esophageal sphincter (LES) characteristics associated with gastro-esophageal reflux disease (GERD) are LES pressure ≤6 mmHg, abdominal length (AL) <1 cm and overall length (OL) <2 cm. The objective of this study was to validate this relationship and evaluate the extent of impact various LES characteristics have on the degree of distal esophageal acid exposure. A retrospective review of a prospectively maintained database identified patients who underwent esophageal manometry and pH studies at Creighton University Medical Center between 1984 and 2008. Patients with esophageal body dysmotility, prior foregut surgery, missing data, no documented symptoms or no pH study were excluded. Study subjects were categorized as follows: (1) normal LES (N-LES): patients with LES pressure of 6-26 mmHg, AL ≥1.0 cm and OL ≥2 cm; (2) incompetent LES (Inc-LES): patients with LES pressure <6.0 mmHg or AL <1 cm or OL <2 cm; and (3) hypertensive LES (HTN-LES): patients with LES pressure >26.0 mmHg with AL ≥1 cm and OL ≥2 cm. The DeMeester score was used to compare differences in acid exposure between the groups. Two thousand and twenty patients satisfied the study criteria. Distal esophageal acid exposure as reflected by the DeMeester score in patients with an Inc-LES (median = 20.05) was significantly higher than in patients with an N-LES (median = 9.5), which in turn was significantly higher than in patients with an HTN-LES. Increasing LES pressure and AL provided protection against acid exposure in a graded fashion. An increasing number of inadequate LES characteristics was associated with an increase both in the percentage of patients with an abnormal DeMeester score and in the degree of acid exposure. LES pressure (≤6 mmHg) and AL (<1 cm) are associated with increased lower esophageal acid exposure, and need to be addressed for definitive management of GERD.
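The three manometric categories in this study reduce to simple threshold logic. A sketch using the thresholds stated in the abstract (the function name and return labels are our own shorthand):

```python
def classify_les(pressure_mmhg, al_cm, ol_cm):
    """Categorize an LES manometric profile per the study's definitions:
    incompetent if pressure <6 mmHg or abdominal length (AL) <1 cm or
    overall length (OL) <2 cm; hypertensive if pressure >26 mmHg with
    adequate lengths; otherwise normal."""
    if pressure_mmhg < 6.0 or al_cm < 1.0 or ol_cm < 2.0:
        return "Inc-LES"
    if pressure_mmhg > 26.0:
        return "HTN-LES"
    return "N-LES"

print(classify_les(4.0, 1.5, 2.5))   # Inc-LES (low pressure)
print(classify_les(15.0, 1.5, 2.5))  # N-LES
print(classify_les(30.0, 1.5, 2.5))  # HTN-LES
```

Note that the incompetence criteria take precedence: a short sphincter is incompetent regardless of how high its pressure is, which matches the study's category definitions.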

  12. Rational evaluation of the therapeutic effect and dosimetry of Auger electrons for radionuclide therapy in a cell culture model.

    PubMed

    Shinohara, Ayaka; Hanaoka, Hirofumi; Sakashita, Tetsuya; Sato, Tatsuhiko; Yamaguchi, Aiko; Ishioka, Noriko S; Tsushima, Yoshito

    2018-02-01

    Radionuclide therapy with low-energy Auger electron emitters may provide high antitumor efficacy while keeping the toxicity to normal organs low. Here we evaluated the usefulness of an Auger electron emitter and compared it with that of a beta emitter for tumor treatment in in vitro models, and conducted a dosimetry simulation using radioiodine-labeled metaiodobenzylguanidine (MIBG) as a model compound. We evaluated the cellular uptake of 125I-MIBG and the therapeutic effects of 125I- and 131I-MIBG in 2D and 3D PC-12 cell culture models. We used a Monte Carlo simulation code (PHITS) to calculate the absorbed radiation dose of 125I or 131I in computer simulation models for 2D and 3D cell cultures. In the dosimetry calculation for the 3D model, several distribution patterns of the radionuclide were applied. A higher cumulative dose was observed in the 3D model due to the prolonged retention of MIBG compared to the 2D model. However, 125I-MIBG showed a greater therapeutic effect in the 2D model compared to the 3D model (respective EC50 values in the 2D and 3D models: 86.9 and 303.9 MBq/cell), whereas 131I-MIBG showed the opposite result (respective EC50 values in the 2D and 3D models: 49.4 and 30.2 MBq/cell). The therapeutic effect of 125I-MIBG was lower than that of 131I-MIBG in both models, but the radionuclide-derived difference was smaller in the 2D model. The dosimetry simulation with PHITS revealed the influence of the radiation quality, the crossfire effect, the radionuclide distribution, and the tumor shape on the absorbed dose. Applying the heterogeneous distribution patterns dramatically changed the radiation dose distribution of 125I-MIBG, and mitigated the difference between the estimated and measured therapeutic effects of 125I-MIBG.
The therapeutic effect of 125I-MIBG was comparable to that of 131I-MIBG in the 2D model, but its efficacy was inferior to that of 131I-MIBG in the 3D model, since the crossfire effect is negligible and the distribution of the radionuclide was insufficiently homogeneous. Thus, Auger electrons would be suitable for treating small tumors. The design of radiopharmaceuticals with Auger electron emitters requires particularly careful consideration of achieving a homogeneous distribution of the compound in the tumor.

  13. Reevaluation of the AAPM TG-43 brachytherapy dosimetry parameters for an 125I seed, and the influence of eye plaque design on dose distributions and dose-volume histograms

    NASA Astrophysics Data System (ADS)

    Aryal, Prakash

    The TG-43 dosimetry parameters of the Advantage(TM) 125I model IAI-125A brachytherapy seed were studied. An investigation using the modern MCNP radiation transport code with updated cross-section libraries was performed. Twelve different simulation conditions were studied for a single seed by varying the coating thickness, mass density, photon energy spectrum and cross-section library. The dose rate was found to be 6.3% lower at 1 cm in comparison to published results. New TG-43 dosimetry parameters are proposed. The dose distribution for a brachytherapy eye plaque, model EP917, was investigated, including the effects of collimation from high-Z slots. Dose distributions for 26 slot designs were determined using Monte Carlo methods and compared among the published literature, a clinical treatment planning system, and physical measurements. The dosimetric effect of the composition and mass density of the gold backing was shown to be less than 3%. Slot depth, width, and length changed the central axis (CAX) dose distributions by <1% per 0.1 mm of design variation. Seed shifts in the slot towards the eye, and shifts of the 125I-laden silver rod within the seed, had the greatest impact on the CAX dose distribution, changing it by 14%, 9%, 4.3%, and 2.7% at 1, 2, 5, and 10 mm, respectively, from the inner scleral surface. The measured, full plaque slot geometry delivered a 2.4% ± 1.1% higher dose along the plaque's CAX than the geometry provided by the manufacturer, and 2.2% ± 2.3% higher than the Plaque Simulator(TM) (PS) treatment planning software (version 5.7.6). The D10 for the simulated tumor, inner sclera, and outer sclera was 9%, 10%, and 19% higher, respectively, for the measured slot geometry than for the manufacturer-provided slot design. In comparison to the measured plaque design, a theoretical plaque having narrow and deep slots delivered 30%, 37%, and 62% lower D10 doses to the tumor, inner sclera, and outer sclera, respectively.
CAX doses at -1, 0, 1, and 2 mm were also lower by factors of 2.6, 1.72, 1.50, and 1.39, respectively. The study identified substantial sensitivity of the EP917 plaque dose distributions to slot design. KEYWORDS: Monte Carlo methods, dosimetry, 125I, TG-43, eye plaque brachytherapy.
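For context, TG-43 parameters like those re-derived in this work feed the standard dose-rate equation; in the 1D (point-source) approximation it reads D(r) = S_K · Λ · (r0/r)² · g(r) · φan(r), with reference distance r0 = 1 cm. A sketch of that evaluation with placeholder parameter values (deliberately not the IAI-125A's published data):

```python
def tg43_point_dose_rate(sk, dose_rate_const, r_cm, g, phi_an, r0=1.0):
    """TG-43 1D-formalism dose rate (cGy/h) at radial distance r_cm.

    sk: air-kerma strength (U); dose_rate_const: dose-rate constant
    Lambda (cGy/h/U); g: radial dose function g(r); phi_an: anisotropy
    function value at r. The geometry function is approximated by the
    inverse-square factor (r0/r)**2, as in the point-source formalism.
    """
    return sk * dose_rate_const * (r0 / r_cm) ** 2 * g * phi_an

# Placeholder inputs for illustration only (not the seed's parameters):
rate = tg43_point_dose_rate(sk=1.0, dose_rate_const=0.98, r_cm=2.0,
                            g=0.61, phi_an=0.94)
```

In clinical practice g(r) and φan(r) are tabulated per seed model, which is exactly why re-derived parameter sets such as those proposed here matter: they change every term in this product except the inverse-square factor.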

  14. Dosimetric evaluation of nanotargeted (188)Re-liposome with the MIRDOSE3 and OLINDA/EXM programs.

    PubMed

    Chang, Chih-Hsien; Chang, Ya-Jen; Lee, Te-Wei; Ting, Gann; Chang, Kwo-Ping

    2012-06-01

    The OLINDA/EXM computer code was created as a replacement for the widely used MIRDOSE3 code for radiation dosimetry in nuclear medicine. A dosimetric analysis with these codes was performed to evaluate nanoliposomes as carriers of radionuclides ((188)Re-liposomes) in colon carcinoma-bearing mice. Pharmacokinetic data for (188)Re-N,N-bis(2-mercaptoethyl)-N',N'-diethylethylenediamine ((188)Re-BMEDA) and (188)Re-liposome were obtained for estimation of absorbed doses in normal organs. Radiation dose estimates for normal tissues were calculated using the MIRDOSE3 and OLINDA/EXM programs for a colon carcinoma solid tumor mouse model. Mean absorbed doses derived from (188)Re-BMEDA and (188)Re-liposome in normal tissues were generally similar as calculated by the MIRDOSE3 and OLINDA/EXM programs. One notable exception to this was red marrow, wherein MIRDOSE3 resulted in higher absorbed doses than OLINDA/EXM (1.53- and 1.60-fold for (188)Re-BMEDA and (188)Re-liposome, respectively). MIRDOSE3 and OLINDA have very similar residence times and organ doses. Bone marrow doses were estimated by designating cortical bone rather than bone marrow as a source organ. The bone marrow doses calculated by MIRDOSE3 are higher than those by OLINDA. If the bone marrow is designated as a source organ, the doses estimated by the MIRDOSE3 and OLINDA programs will be very similar.
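Both MIRDOSE3 and OLINDA/EXM implement the MIRD schema, in which the absorbed dose to a target organ is the sum, over source organs, of the residence time in each source times the corresponding S value: D(target) = Σ_source τ(source) · S(target ← source). A sketch of that sum with invented residence times and S values (not data from the (188)Re-liposome study):

```python
def mird_organ_dose(residence_times_h, s_values):
    """MIRD-schema absorbed dose to one target organ.

    residence_times_h: {source organ: residence time tau (h)}
    s_values: {source organ: S(target <- source) (mGy/MBq-h)}
    Returns dose per unit administered activity (mGy/MBq).
    """
    return sum(tau * s_values[organ] for organ, tau in residence_times_h.items())

# Invented illustrative numbers only:
tau = {"liver": 5.0, "spleen": 1.2, "remainder": 8.0}
s_to_marrow = {"liver": 1.1e-3, "spleen": 2.5e-4, "remainder": 4.0e-4}
dose = mird_organ_dose(tau, s_to_marrow)  # mGy/MBq to red marrow
```

The red-marrow discrepancy reported in this record is then easy to interpret: the two programs share the residence times, so the 1.5-1.6-fold difference must come from different S(marrow ← source) values, here driven by which skeletal compartment is designated as the source organ.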

  15. Monte Carlo simulations for angular and spatial distributions in therapeutic-energy proton beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Pan, C. Y.; Chiang, K. J.; Yuan, M. C.; Chu, C. H.; Tsai, Y. W.; Teng, P. K.; Lin, C. H.; Chao, T. C.; Lee, C. C.; Tung, C. J.; Chen, A. E.

    2017-11-01

    The purpose of this study is to compare the angular and spatial distributions of therapeutic-energy proton beams obtained from the FLUKA, GEANT4 and MCNP6 Monte Carlo codes. Monte Carlo simulations of proton beams passing through two thin targets and a water phantom were investigated to compare the primary and secondary proton fluence distributions and the dosimetric differences among these codes. The angular fluence distributions, central axis depth-dose profiles, and lateral distributions across the Bragg peak field were calculated to compare the proton angular and spatial distributions and energy deposition. Benchmark verifications among the three Monte Carlo simulations can be used to evaluate the residual proton fluence for the mean range and to estimate the depth and lateral dose distributions, with the characteristic depths and lengths along the central axis serving as physical indices for evaluating treatment effectiveness. The results showed general agreement among the codes, except for some deviations found in the penumbra region. These calculated results are also particularly helpful for understanding the primary and secondary proton components in stray radiation calculations and reference proton standard determination, as well as for determining lateral dose distribution performance in proton small-field dosimetry. This work can thus serve as a guide to the application of Monte Carlo methods to therapeutic-energy protons.

  16. Implementation of new physics models for low energy electrons in liquid water in Geant4-DNA.

    PubMed

    Bordage, M C; Bordes, J; Edel, S; Terrissol, M; Franceries, X; Bardiès, M; Lampe, N; Incerti, S

    2016-12-01

    A new alternative set of elastic and inelastic cross sections has been added to the very low energy extension of the Geant4 Monte Carlo simulation toolkit, Geant4-DNA, for the simulation of electron interactions in liquid water. These cross sections have been obtained from the CPA100 Monte Carlo track structure code, which has been a reference in the microdosimetry community for many years. They are compared to the default Geant4-DNA cross sections and show better agreement with published data. In order to verify the correct implementation of the CPA100 cross section models in Geant4-DNA, simulations of the number of interactions and ranges were performed using Geant4-DNA with this new set of models, and the results were compared with corresponding results from the original CPA100 code. Good agreement is observed between the implementations, with relative differences lower than 1% regardless of the incident electron energy. Useful quantities related to the deposited energy at the scale of the cell or the organ of interest for internal dosimetry, like dose point kernels, are also calculated using these new physics models. They are compared with results obtained using the well-known Penelope Monte Carlo code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
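A dose point kernel like those computed here is, conceptually, the energy deposited per unit mass, tallied in spherical shells around an isotropic point source. A toy sketch of that tallying step only; the "transport" below is a deliberately crude caricature (each electron deposits all its energy at an exponentially sampled distance) and is in no way a Geant4-DNA or CPA100 physics model:

```python
import math
import random

def toy_dose_point_kernel(n_electrons, mean_range_cm, n_bins, r_max_cm, rho=1.0):
    """Tally energy deposition per shell mass around a point source.

    Each 'electron' deposits one energy unit at a radius drawn from an
    exponential distribution (toy transport). Dose in each shell is the
    tallied energy divided by the shell mass rho*(4/3)*pi*(r2^3 - r1^3).
    """
    edges = [r_max_cm * i / n_bins for i in range(n_bins + 1)]
    energy = [0.0] * n_bins
    random.seed(0)  # deterministic for reproducibility
    bin_width = r_max_cm / n_bins
    for _ in range(n_electrons):
        r = random.expovariate(1.0 / mean_range_cm)
        if r < r_max_cm:
            energy[min(int(r / bin_width), n_bins - 1)] += 1.0
    return [e / (rho * 4.0 / 3.0 * math.pi * (b ** 3 - a ** 3))
            for e, a, b in zip(energy, edges, edges[1:])]

# Sub-micrometre scale, as relevant for low-energy electrons in water:
kernel = toy_dose_point_kernel(100000, 0.5e-4, 20, 2.0e-4)
```

Real kernels replace the exponential caricature with full track-structure transport (the point of codes like Geant4-DNA, CPA100, or Penelope), but the shell-binning normalization is the same.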

  17. Syndromes drépanocytaires majeurs et infections associées chez l'enfant au Burkina Faso

    PubMed Central

    Douamba, Sonia; Nagalo, Kisito; Tamini, Laure; Traoré, Ismaël; Kam, Madibèlè; Kouéta, Fla; Yé, Diarra

    2017-01-01

    Introduction The aim of this study was to investigate infections in children with a major sickle cell syndrome. Methods A single-centre, retrospective, descriptive hospital study over ten years, conducted in Ouagadougou, Burkina Faso. Included were all children with a major sickle cell syndrome (homozygous SS and double heterozygous SC, SDPunjab, Sβ-thalassaemia, SOArab and SE) hospitalized for a microbiologically confirmed bacterial infection. Results One hundred and thirty-three patients met our inclusion criteria. The SS phenotype accounted for 63.2% of cases and SC for 36.8%. The frequency of infections was 21.8%. In 45.9% of cases, these affected children aged 0 to 5 years. The most frequent signs were osteoarticular pain (42.1%), cough (25.7%), abdominal pain (23.3%) and pallor (43.6%). Bronchopneumonia (31.6%), malaria (16.5%), osteomyelitis (12.8%) and septicaemia (10.5%) were the main diagnoses. The pathogens isolated were Streptococcus pneumoniae (35.5%) and Salmonella sp. (33.3%). Third-generation cephalosporins were the most frequently prescribed antibiotics. The crude mortality rate was 7.5%. Conclusion Bacterial infections and malaria dominate the picture of infections in children with major sickle cell disease at the Charles De Gaulle Pediatric University Hospital. The authors recommend establishing a national sickle cell disease management program, which would help prevent or even reduce the occurrence of infections in children with sickle cell disease. PMID:28450986

  18. Facteurs de risque de la tuberculose multi-résistante dans la ville de Kinshasa en République Démocratique du Congo

    PubMed Central

    Misombo-Kalabela, André; Nguefack-Tsague, Georges; Kalla, Ginette Claude Mireille; Ze, Emmanuel Afane; Diangs, Kimpanga; Panda, Tshapenda; Kebela, Ilunga; Fueza, Serge Bisuta; Magazani, Nzanzu; Mbopi-Kéou, François-Xavier

    2016-01-01

    Introduction The objective of this study was to identify the risk factors associated with multidrug-resistant tuberculosis in Kinshasa, Democratic Republic of the Congo. Methods This was a case-control study. Cases comprised all tuberculosis patients resistant to rifampicin and isoniazid notified in Kinshasa from January 2012 to June 2013. Controls were tuberculosis patients treated during the same period as the cases who were declared cured at the end of treatment. Ethical clearance was obtained for this study. Results The sample consisted of 213 participants: 132 men (62%) and 81 women (38%). The median age was 31 years (range 16-73 years). The factors significantly associated (p<0.05) with multidrug-resistant tuberculosis were failure to respect medication times (OR = 111) (80% of cases vs 4% of controls); treatment failure (OR = 20) (76% of cases vs 13% of controls); a family history of multidrug-resistant tuberculosis (OR = 6.4) (28% of cases vs 6% of controls); lack of knowledge about multidrug-resistant tuberculosis (OR = 3.2) (31% of cases vs 59% of controls); a stay in prison (OR = 7.6) (10% of cases vs 1% of controls); and treatment interruption (OR = 6.1) (59% of cases vs 19% of controls). Conclusion The emergence of multidrug-resistant tuberculosis can be prevented by implementing appropriate diagnostic and treatment strategies. PMID:27516818

  19. List of Standards to Accompany Manual of Documentation Practices Applicable to Defence-Aerospace Scientific and Technical Information (Liste des Normes a Placer en Annexe au Manuel Concernant les Techniques Documentaires Applicables a l’Information Scientifique et Technique de la Defense et du Secteur Aerospatial)

    DTIC Science & Technology

    1990-10-01

    CHARACTERS ISO 0233 1984 DOCUMENTATION - TRANSLITERATION OF ARABIC CHARACTERS INTO LATIN CHARACTERS ISO 0259 1954 DOCUMENTATION - TRANSLITERATION OF HEBREW... TRANSLITERATION OF ARABIC CHARACTERS IN LATIN CHARACTERS SF I 46-DUO 1N64 TRANSLITERATION - TRANSLITERATION OF HEBREW IN LATIN CHARACTERS. 46-010... LANGUAGE CODES (ANNEX: AUTHORITY SYMBOLS) DIN 31 634 CONVERSION OF THE GREEK ALPHABET DIN 31 635 CONVERSION OF THE ARABIC ALPHABET DIN 31 635 CONVERSION OF

  20. Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds

    NASA Astrophysics Data System (ADS)

    Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.

    2015-12-01

    An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
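The bias the abstract describes arises in the grid-mean ("all-or-nothing") closure that subgrid condensation schemes improve on: condensate is diagnosed as whatever excess of total water exists over saturation at the grid-mean state, ignoring subgrid fluctuations. A simplified sketch of that baseline diagnosis, using an approximate Clausius-Clapeyron-type saturation fit (the constants are the standard approximate Bolton/Tetens-style values, not PyCLES's formulation):

```python
import math

def saturation_mixing_ratio(temp_k, p_pa):
    """Approximate saturation mixing ratio over liquid water (kg/kg),
    from a Clausius-Clapeyron-type fit for saturation vapor pressure
    and the gas-constant ratio 0.622."""
    e_s = 611.2 * math.exp(17.67 * (temp_k - 273.15) / (temp_k - 29.65))
    return 0.622 * e_s / (p_pa - e_s)

def all_or_nothing_condensate(q_t, temp_k, p_pa):
    """Grid-mean saturation adjustment: liquid is any excess of total
    water q_t over saturation; subgrid variability of the thermodynamic
    state (the quantity the new closure models) is ignored."""
    return max(q_t - saturation_mixing_ratio(temp_k, p_pa), 0.0)

# Moist grid cell near cloud base (illustrative values):
q_l = all_or_nothing_condensate(q_t=0.012, temp_k=285.0, p_pa=90000.0)
```

Because entropy and total water map nonlinearly onto temperature and condensate through functions like these, averaging before diagnosing (as above) differs from diagnosing and then averaging, which is precisely the bias a subgrid-scale condensation closure is built to correct.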

  1. FX-25 and FX-100 Propagation Experiments.

    DTIC Science & Technology

    1982-07-01

    Radiochromic Foil Dosimetry Blue cellophane is one of the most widely used radiochromic film dosimeters.6 Blue cellophane exposed to an intense electron ...shown in Fig. 18, Appendix B. Thermoluminescent Dosimetry Lithium fluoride thermoluminescent dosimeters (TLDs) were used on a limited number of shots to...corroboration of the current distribution included radiochromic-film dosimetry, TLD arrays, and open-shutter photography. Because of our discovery of the

  2. TRIAGE of Irradiated Personnel

    DTIC Science & Technology

    1996-09-25

    Vivo Electron Paramagnetic Resonance, Electron Spin Resonance (EPR, ESR) for In Vivo Dosimetry Under Field Conditions Dr. Harold M. Swartz Dartmouth...Force Medical Center Andrews Air Force Base, MD • Status and Limitations of Physical Dosimetry in the Field Environment David A. Schauer, LCDR, MSC...USN Naval Dosimetry Center Navy Environmental Health Center Detachment Bethesda, MD • NATO Policy and Guidance on Antiemetic Usage Robert Kehlet

  3. Hanford internal dosimetry program manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, E.H.; Sula, M.J.; Bihl, D.E.

    1989-10-01

    This document describes the Hanford Internal Dosimetry Program. Program services include administering the bioassay monitoring program, evaluating and documenting assessments of internal exposure and dose, ensuring that analytical laboratories conform to requirements, selecting and applying appropriate models and procedures for evaluating internal radionuclide deposition and the resulting dose, and technically guiding and supporting Hanford contractors in matters regarding internal dosimetry. 13 refs., 16 figs., 42 tabs.

  4. USAFSAM Review and Analysis of Radiofrequency Radiation Bioeffects Literature: Second Report.

    DTIC Science & Technology

    1982-05-01

    10 Cellular 11 Mechanisms of interaction 12 Environmental 13 Medical applications 14 Review 15 Ecological 16 Physical methods/dosimetry 17 Other 18... APPLICATIONS List of Analyses ... 137 (14) REVIEW List of Analyses ... 138 (16) PHYSICAL METHODS/DOSIMETRY... physiological

  5. Measurement of Libby Amphibole (LA) Elongated Particle Dissolution Rates and Alteration of Size/Shape Distributions in Support of Human Dosimetry Model Development and Relative Potency Determinations

    EPA Science Inventory

    To maximize the value of toxicological data in development of human health risk assessment models of inhaled elongated mineral particles, improvements in human dosimetry modeling are needed. In order to extend the dosimetry model of deposited fibers (Asgharian et al., Johnson 201...

  6. In vivo thermoluminescence dosimetry for total body irradiation.

    PubMed

    Palkosková, P; Hlavata, H; Dvorák, P; Novotný, J; Novotný, J

    2002-01-01

    An improvement in the clinical results obtained using total body irradiation (TBI) with photon beams requires precise TBI treatment planning, reproducible irradiation, precise in vivo dosimetry, accurate documentation and careful evaluation. In vivo dosimetry using LiF Harshaw TLD-100 chips was used during the TBI treatments performed in our department. The results of in vivo thermoluminescence dosimetry (TLD) show that, using TLD measurements and interactive adjustment of treatment parameters based on these measurements, such as monitor unit calculations, lung shielding thickness and patient positioning, it is possible to achieve high precision in absorbed dose delivery (deviations of less than 0.5%) as well as in homogeneity of irradiation (less than 6%).

  7. CIEMAT EXTERNAL DOSIMETRY SERVICE: ISO/IEC 17025 ACCREDITATION AND 3 Y OF OPERATIONAL EXPERIENCE AS AN ACCREDITED LABORATORY.

    PubMed

    Romero, A M; Rodríguez, R; López, J L; Martín, R; Benavente, J F

    2016-09-01

    In 2008, the CIEMAT Radiation Dosimetry Service decided to implement a quality management system, in accordance with established requirements, in order to achieve ISO/IEC 17025 accreditation. Although the Service comprises the approved individual monitoring services of both external and internal radiation, this paper is specific to the actions taken by the External Dosimetry Service, including personal and environmental dosimetry laboratories, to gain accreditation and the reflections of 3 y of operational experience as an accredited laboratory. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Gamma response characterizations of optically stimulated luminescence (OSL) affects personal dosimetry

    NASA Astrophysics Data System (ADS)

    Monthonwattana, S.; Esor, J.; Rungseesumran, T.; Intang, A.

    2017-06-01

    Optically stimulated luminescence (OSL) is the technique currently used for personal dosimetry by the Nuclear Technology Service Center, replacing thermoluminescence dosimetry (TLD), because OSL offers advantages such as repeated readout and elimination of the heating process. In this study, OSL was used to test the gamma response characteristics. A detailed OSL investigation for personal dosimetry was carried out in the dose range of 0.2-3.0 mSv. The batch homogeneity was 7.66%. The R2 value of the linear regression was 0.9997. The difference ratio of the angular dependence at ±60° was 8.7%. Fading of the reading was about 3%.

  9. 3D dosimetry by optical-CT scanning

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2006-12-01

    The need for an accurate, practical, low-cost 3D dosimetry system is becoming ever more critical as modern dose delivery techniques increase in complexity and sophistication. A recent report from the Radiological Physics Center (RPC) (1) revealed that 38% of institutions failed the head-and-neck IMRT phantom credentialing test at the first attempt. This was despite generous passing criteria (within 7% dose-difference or 4 mm distance-to-agreement) evaluated at a half-dozen points and a single axial plane. The question that arises from this disturbing finding is: what percentage of institutions would have failed if a comprehensive 3D measurement had been feasible, rather than measurements restricted to the central film plane and TLD points? This question can only be adequately answered by a comprehensive 3D dosimetry system, which presents a compelling argument for its development as a clinically viable, low-cost dosimetry solution. Optical-CT dosimetry is perhaps the closest system to providing such a comprehensive solution. In this article, we review the origins and recent developments of optical-CT dosimetry systems. The principal focus is on first-generation systems, known to have the highest accuracy but longer scan times.

  10. Nuclear accident dosimetry intercomparison studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sims, C.S.

    1989-09-01

    Twenty-two nuclear accident dosimetry intercomparison studies utilizing the fast-pulse Health Physics Research Reactor at the Oak Ridge National Laboratory have been conducted since 1965. These studies have provided a total of 62 different organizations a forum for discussion of criticality accident dosimetry, an opportunity to test their neutron and gamma-ray dosimetry systems under a variety of simulated criticality accident conditions, and the experience of comparing results with reference dose values as well as with the measured results obtained by others making measurements under identical conditions. Sixty-nine nuclear accidents (27 with unmoderated neutron energy spectra and 42 with eight different shielded spectra) have been simulated in the studies. Neutron doses were in the 0.2-8.5 Gy range and gamma doses in the 0.1-2.0 Gy range. A total of 2,289 dose measurements (1,311 neutron, 978 gamma) were made during the intercomparisons. The primary methods of neutron dosimetry were activation foils, thermoluminescent dosimeters, and blood sodium activation. The main methods of gamma dose measurement were thermoluminescent dosimeters, radiophotoluminescent glass, and film. About 68% of the neutron measurements met the accuracy guidelines (±25%) and about 52% of the gamma measurements met the accuracy criterion (±20%) for accident dosimetry.

  11. EANM Dosimetry Committee series on standard operational procedures for pre-therapeutic dosimetry II. Dosimetry prior to radioiodine therapy of benign thyroid diseases.

    PubMed

    Hänscheid, Heribert; Canzi, Cristina; Eschner, Wolfgang; Flux, Glenn; Luster, Markus; Strigari, Lidia; Lassmann, Michael

    2013-07-01

    The EANM Dosimetry Committee Series "Standard Operational Procedures for Pre-Therapeutic Dosimetry" (SOP) provides advice to scientists and clinicians on how to perform patient-specific absorbed dose assessments. This particular SOP describes how to tailor the therapeutic activity to be administered for radioiodine therapy of benign thyroid diseases such as Graves' disease or hyperthyroidism. Pretherapeutic dosimetry is based on the assessment of the individual (131)I kinetics in the target tissue after the administration of a tracer activity. The present SOP makes proposals on the equipment to be used and guides the user through the measurements. Time schedules for the measurement of the fractional (131)I uptake in the diseased tissue are recommended and it is shown how to calculate from these datasets the therapeutic activity necessary to administer a predefined target dose in the subsequent therapy. Potential sources of error are pointed out and the inherent uncertainties of the procedures depending on the number of measurements are discussed. The theoretical background and the derivation of the listed equations from compartment models of the iodine kinetics are explained in a supplementary file published online only.
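    The workflow described above — measure the fractional uptake of a tracer activity at a few time points, fit the kinetics, and scale to the activity that delivers a predefined target dose — can be sketched numerically. This is a minimal illustration, not the SOP's procedure: it assumes a simple mono-exponential uptake model, and the constant `k` is an invented placeholder for the dose deposited per unit of time-integrated activity, not a tabulated value.

```python
import math

def fit_monoexp(times_d, uptakes):
    """Least-squares fit of U(t) = U0 * exp(-lam * t) on log-transformed data."""
    n = len(times_d)
    xs, ys = times_d, [math.log(u) for u in uptakes]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    lam = -slope                          # effective decay constant (1/day)
    u0 = math.exp(ybar - slope * xbar)    # extrapolated initial uptake fraction
    return u0, lam

def therapeutic_activity_mbq(target_dose_gy, mass_g, times_d, uptakes, k=0.1167):
    """Marinelli-type estimate: dose per administered activity scales with the
    time-integrated uptake U0/lam divided by the target mass.  The constant k
    (Gy*g / (MBq*day)) is an assumption for illustration only."""
    u0, lam = fit_monoexp(times_d, uptakes)
    dose_per_mbq = k * u0 / (lam * mass_g)   # Gy per administered MBq
    return target_dose_gy / dose_per_mbq
```

The fit is exact for noise-free mono-exponential data; with real uptake measurements, the number and timing of the measurement points dominate the uncertainty, as the SOP discusses.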

  12. Overview of physical dosimetry methods for triage application integrated in the new European network RENEB.

    PubMed

    Trompier, François; Burbidge, Christopher; Bassinet, Céline; Baumann, Marion; Bortolin, Emanuela; De Angelis, Cinzia; Eakins, Jonathan; Della Monaca, Sara; Fattibene, Paola; Quattrini, Maria Cristina; Tanner, Rick; Wieser, Albrecht; Woda, Clemens

    2017-01-01

    In the EC-funded project RENEB (Realizing the European Network in Biodosimetry), physical methods applied to fortuitous dosimetric materials are used to complement biological dosimetry, to increase dose assessment capacity for large-scale radiation/nuclear accidents. This paper describes the work performed to implement Optically Stimulated Luminescence (OSL) and Electron Paramagnetic Resonance (EPR) dosimetry techniques. OSL is applied to electronic components and EPR to touch-screen glass from mobile phones. To implement these new approaches, several blind tests and inter-laboratory comparisons (ILC) were organized for each assay. OSL systems have shown good performance. EPR systems also show good performance in controlled conditions, but ILC have demonstrated that post-irradiation exposure to sunlight increases the complexity of the EPR signal analysis. Physically based dosimetry techniques offer high capacity and new possibilities for accident dosimetry, especially in the case of large-scale events. Some of the techniques applied can be considered operational (e.g. OSL on Surface Mounting Devices [SMD]) and provide a large increase in measurement capacity for existing networks. Other techniques and devices currently undergoing validation or development in Europe could lead to considerable increases in the capacity of the RENEB accident dosimetry network.

  13. Advances in Inhalation Dosimetry Models and Methods for Occupational Risk Assessment and Exposure Limit Derivation

    PubMed Central

    Kuempel, Eileen D.; Sweeney, Lisa M.; Morris, John B.; Jarabek, Annie M.

    2015-01-01

    The purpose of this article is to provide an overview and practical guide to occupational health professionals concerning the derivation and use of dose estimates in risk assessment for development of occupational exposure limits (OELs) for inhaled substances. Dosimetry is the study and practice of measuring or estimating the internal dose of a substance in individuals or a population. Dosimetry thus provides an essential link to understanding the relationship between an external exposure and a biological response. Use of dosimetry principles and tools can improve the accuracy of risk assessment, and reduce the uncertainty, by providing reliable estimates of the internal dose at the target tissue. This is accomplished through specific measurement data or predictive models, when available, or the use of basic dosimetry principles for broad classes of materials. Accurate dose estimation is essential not only for dose-response assessment, but also for interspecies extrapolation and for risk characterization at given exposures. Inhalation dosimetry is the focus of this paper since it is a major route of exposure in the workplace. Practical examples of dose estimation and OEL derivation are provided for inhaled gases and particulates. PMID:26551218

  14. Performance of Al2O3:C optically stimulated luminescence dosimeters for clinical radiation therapy applications.

    PubMed

    Hu, B; Wang, Y; Zealey, W

    2009-12-01

    A commercial Optically Stimulated Luminescence (OSL) dosimetry system developed by Landauer was tested to analyse the possibility of using OSL dosimetry for external beam radiotherapy planning checks. Experiments were performed to determine signal sensitivity, dose response range, beam type/energy dependency, reproducibility and linearity. Optical annealing processes to test OSL material reusability were also studied. In each case the measurements were converted into absorbed dose. The experimental results show that OSL dosimetry provides a wide dose response range, good linearity and reproducibility for doses up to 800 cGy. The OSL output is linear with dose up to 600 cGy, showing a maximum deviation from linearity of 2.0% for doses above 600 cGy. The standard deviation in response of 20 dosimeters was 3.0%. After optical annealing using incandescent light, the readout intensity decreased by approximately 98% in the first 30 minutes. The readout intensity, I, decreased after repeated optical annealing as a power law, I ∝ t^(-1.3). This study concludes that OSL dosimetry can provide an alternative technique for in vivo dosimetry if rigorous measurement protocols are established.
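    A power-law decay like the reported I ∝ t^(-1.3) is conventionally recovered from readout data by linear regression in log-log space, since log I = log c − p·log t. A minimal sketch (the function name and data are illustrative, not from the study):

```python
import math

def fit_power_law(times, intensities):
    """Fit I(t) = c * t**(-p) by linear regression in log-log space,
    mirroring the reported I ~ t^(-1.3) annealing behaviour.
    Returns (c, p)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope
```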

  15. Whole-body voxel-based personalized dosimetry: Multiple voxel S-value approach for heterogeneous media with non-uniform activity distributions.

    PubMed

    Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung

    2017-12-14

    Personalized dosimetry with high accuracy is becoming more important because of growing interest in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available; however, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, referred to as the multiple VSV approach. Methods: Multiple numbers (N) of VSVs for media with different densities, covering the whole-body density range, were used instead of a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation and convolved with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by summation of the N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel and organ levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements in voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimation was observed at lung boundaries. With multiple VSVs (N = 8), we achieved voxel-level errors of 2.06%. In the patient dosimetry study, the proposed method showed much improved results compared with the single VSV and organ-based dosimetry: errors at the organ level were -6.71%, 2.17%, and 227.46% for the single VSV, multiple VSV, and organ-based dosimetry, respectively. Conclusion: The multiple VSV approach for heterogeneous media with non-uniform activity distributions offers fast personalized dosimetry at the whole-body level, yielding results comparable to those of the direct Monte Carlo approach.
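    The assembly step described in the Methods — convolve the time-integrated activity with each density-specific kernel, then take each voxel's dose from the map matching its density segment — can be sketched in one dimension. The kernel values and density bin edges below are invented for illustration; the actual method uses 3D Monte Carlo-derived VSV kernels.

```python
import numpy as np

def multiple_vsv_dose(activity, density, kernels, bin_edges):
    """Toy 1-D illustration of the multiple-VSV method: one dose map per
    density-specific kernel, assembled by density segmentation."""
    # One convolution per density bin (density-specific dose maps).
    dose_maps = [np.convolve(activity, k, mode="same") for k in kernels]
    # Segment voxels into density bins: index i selects kernels[i].
    bins = np.digitize(density, bin_edges)
    # Assemble the final map from the segment each voxel belongs to.
    dose = np.zeros_like(activity, dtype=float)
    for i, dm in enumerate(dose_maps):
        dose[bins == i] = dm[bins == i]
    return dose
```

A usage example with a single source voxel, a "lung-like" bin below 1.0 g/cm³ and a "water-like" bin above it, shows the dose spreading further in the low-density segment.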

  16. Whole-remnant and maximum-voxel SPECT/CT dosimetry in ¹³¹I-NaI treatments of differentiated thyroid cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mínguez, Pablo, E-mail: pablo.minguezgabina@osakid

    Purpose: To investigate possible differences between SPECT/CT-based whole-remnant and maximum-voxel dosimetry in patients receiving radioiodine ablation treatment of differentiated thyroid cancer (DTC). Methods: Eighteen DTC patients were administered 1.11 GBq of ¹³¹I-NaI after near-total thyroidectomy and rhTSH stimulation. Two patients had two remnants, so in total dosimetry was performed for 20 sites. Three SPECT/CT scans were performed for each patient at 1, 2, and 3–7 days after administration. The activity, the remnant mass, and the maximum-voxel activity were determined from these images and from a recovery-coefficient curve derived from experimental phantom measurements. The cumulated activity was estimated using trapezoidal-exponential integration. Finally, the absorbed dose was calculated using S-values for unit-density spheres in whole-remnant dosimetry and S-values for voxels in maximum-voxel dosimetry. Results: The mean absorbed dose obtained from whole-remnant dosimetry was 40 Gy (range 2–176 Gy) and from maximum-voxel dosimetry 34 Gy (range 2–145 Gy). For any given patient, the activity concentrations for each of the three time-points were approximately the same for the two methods. The effective half-lives varied (R = 0.865), mainly due to discrepancies in estimation of the longer effective half-lives. On average, absorbed doses obtained from whole-remnant dosimetry were 1.2 ± 0.2 (1 SD) higher than for maximum-voxel dosimetry, mainly due to differences in the S-values. The method-related differences were, however, small in comparison to the wide range of absorbed doses obtained in patients. Conclusions: Simple and consistent procedures for SPECT/CT-based whole-remnant and maximum-voxel dosimetry have been described, both based on experimentally determined recovery coefficients. Generally the results from the two approaches are consistent, although there is a small, systematic difference in the absorbed dose due to differences in the S-values, and some variability due to differences in the estimated effective half-lives, especially when the effective half-life is long. Irrespective of the method used, the patient absorbed doses obtained span two orders of magnitude.
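    The trapezoidal-exponential integration mentioned above can be sketched as follows: trapezoids between the measured time points, plus an analytic exponential tail beyond the last scan using the effective decay constant fitted to the final two measurements. This is a simplified illustration (extrapolation before the first time point is omitted):

```python
import math

def cumulated_activity(times_h, activities_mbq):
    """Trapezoidal-exponential integration of a time-activity curve.
    Returns the cumulated activity in MBq*h."""
    # Trapezoids between measured time points.
    area = 0.0
    for (t0, a0), (t1, a1) in zip(zip(times_h, activities_mbq),
                                  zip(times_h[1:], activities_mbq[1:])):
        area += 0.5 * (a0 + a1) * (t1 - t0)
    # Effective decay constant from the last two measurements.
    lam = math.log(activities_mbq[-2] / activities_mbq[-1]) / (times_h[-1] - times_h[-2])
    # Analytic exponential tail: integral of A_last * exp(-lam*t) = A_last/lam.
    area += activities_mbq[-1] / lam
    return area
```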

  17. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 05: Not all geometries are equivalent for magnetic field Fano cavity tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malkov, Victor N.; Rogers, David W.O.

    The coupling of MRI and radiation treatment systems for magnetic resonance guided radiation therapy necessitates a reliable magnetic-field-capable Monte Carlo (MC) code. In addition to the influence of the magnetic field on dose distributions, the question of proper calibration has arisen due to the several-percent variation of ion chamber and solid state detector responses in magnetic fields when compared to the 0 T case (Reynolds et al., Med Phys, 2013). In the absence of a magnetic field, EGSnrc has been shown to pass the Fano cavity test (a rigorous benchmarking tool for MC codes) at the 0.1% level (Kawrakow, Med Phys, 2000), and similar results should be required of magnetic-field-capable MC algorithms. To properly test such developing MC codes, the Fano cavity theorem has been adapted to function in a magnetic field (Bouchard et al., PMB, 2015). In this work, the Fano cavity test is applied in slab and ion-chamber-like geometries to test the transport options of a magnetic field algorithm implemented in EGSnrc. Results show that the deviation of the MC dose from the expected Fano cavity theory value is highly sensitive to the choice of geometry, and the ion chamber geometry appears to pass the test more easily than larger slab geometries. As magnetic field MC codes begin to be used for dose simulations and correction factor calculations, care must be taken to apply the most rigorous Fano test geometries to ensure the reliability of such algorithms.

  18. Treatment recommendations for extrapyramidal side effects associated with second-generation antipsychotic use in children and adolescents

    PubMed Central

    Pringsheim, Tamara; Doja, Asif; Belanger, Stacey; Patten, Scott

    2012-01-01

    BACKGROUND AND OBJECTIVE: Antipsychotic use in children is increasing. This article aims to guide clinicians in the clinical management of the extrapyramidal side effects of second-generation antipsychotics. METHODOLOGY: Publications, interviews with key informants, and exchanges with focus-group members and stakeholders were used to identify the main clinical areas of guidance and the preferred structure for these recommendations. The guideline committee members received the draft recommendations, evaluated the information gathered through a systematic literature review, and used a nominal group process to reach consensus on treatment recommendations. The guidelines include a description of the neurological abnormalities frequently observed with antipsychotic use, along with recommendations on how to examine and quantify these abnormalities. A stepwise approach to the management of neurological abnormalities is presented. RESULTS: Several types of extrapyramidal symptoms attributable to antipsychotic use can be observed in children, including neuroleptic-induced acute dystonia, akathisia, parkinsonism and tardive dyskinesia, as well as tardive dystonia, tardive akathisia and withdrawal dyskinesias. The vast majority of evidence on the treatment of antipsychotic-induced movement disorders comes from adult patients with schizophrenia. Given the scarcity of paediatric data, the recommendations are derived from publications on both adults and children. In view of the limited generalizability of adult data to children, these recommendations should be regarded as based on expert opinion rather than on firm evidence. CONCLUSION: Clinicians must be aware that second-generation antipsychotics can induce neurological side effects and should be extremely vigilant when prescribing them. PMID:24082814

  19. A Comparison of Singlet Oxygen Explicit Dosimetry (SOED) and Singlet Oxygen Luminescence Dosimetry (SOLD) for Photofrin-Mediated Photodynamic Therapy

    PubMed Central

    Kim, Michele M.; Penjweini, Rozhin; Gemmell, Nathan R.; Veilleux, Israel; McCarthy, Aongus; Buller, Gerald S.; Hadfield, Robert H.; Wilson, Brian C.; Zhu, Timothy C.

    2016-01-01

    Accurate photodynamic therapy (PDT) dosimetry is critical for the use of PDT in the treatment of malignant and nonmalignant localized diseases. A singlet oxygen explicit dosimetry (SOED) model has been developed for in vivo purposes. It involves the measurement of the key components in PDT—light fluence (rate), photosensitizer concentration, and ground-state oxygen concentration ([3O2])—to calculate the amount of reacted singlet oxygen ([1O2]rx), the main cytotoxic component in type II PDT. Experiments were performed in phantoms with the photosensitizer Photofrin and in solution using phosphorescence-based singlet oxygen luminescence dosimetry (SOLD) to validate the SOED model. Oxygen concentration and photosensitizer photobleaching versus time were measured during PDT, along with direct SOLD measurements of singlet oxygen and triplet state lifetime (τΔ and τt), for various photosensitizer concentrations to determine necessary photophysical parameters. SOLD-determined cumulative [1O2]rx was compared to SOED-calculated [1O2]rx for various photosensitizer concentrations to show a clear correlation between the two methods. This illustrates that explicit dosimetry can be used when phosphorescence-based dosimetry is not feasible. Using SOED modeling, we have also shown evidence that SOLD-measured [1O2]rx using a 523 nm pulsed laser can be used to correlate to singlet oxygen generated by a 630 nm laser during a clinical malignant pleural mesothelioma (MPM) PDT protocol by using a conversion formula. PMID:27929427

  20. A Comparison of Singlet Oxygen Explicit Dosimetry (SOED) and Singlet Oxygen Luminescence Dosimetry (SOLD) for Photofrin-Mediated Photodynamic Therapy.

    PubMed

    Kim, Michele M; Penjweini, Rozhin; Gemmell, Nathan R; Veilleux, Israel; McCarthy, Aongus; Buller, Gerald S; Hadfield, Robert H; Wilson, Brian C; Zhu, Timothy C

    2016-12-06

    Accurate photodynamic therapy (PDT) dosimetry is critical for the use of PDT in the treatment of malignant and nonmalignant localized diseases. A singlet oxygen explicit dosimetry (SOED) model has been developed for in vivo purposes. It involves the measurement of the key components in PDT (light fluence (rate), photosensitizer concentration, and ground-state oxygen concentration [³O₂]) to calculate the amount of reacted singlet oxygen ([¹O₂]rx), the main cytotoxic component in type II PDT. Experiments were performed in phantoms with the photosensitizer Photofrin and in solution using phosphorescence-based singlet oxygen luminescence dosimetry (SOLD) to validate the SOED model. Oxygen concentration and photosensitizer photobleaching versus time were measured during PDT, along with direct SOLD measurements of singlet oxygen and triplet state lifetime (τΔ and τt), for various photosensitizer concentrations to determine necessary photophysical parameters. SOLD-determined cumulative [¹O₂]rx was compared to SOED-calculated [¹O₂]rx for various photosensitizer concentrations to show a clear correlation between the two methods. This illustrates that explicit dosimetry can be used when phosphorescence-based dosimetry is not feasible. Using SOED modeling, we have also shown evidence that SOLD-measured [¹O₂]rx using a 523 nm pulsed laser can be used to correlate to singlet oxygen generated by a 630 nm laser during a clinical malignant pleural mesothelioma (MPM) PDT protocol by using a conversion formula.

  1. Personnel neutron dosimetry using electrochemically etched CR-39 foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hankins, D.E.; Homann, S.; Westermark, J.

    1986-09-17

    A personnel neutron dosimetry system has been developed based on the electrochemical etching of CR-39 plastic at elevated temperatures. The doses obtained using this dosimeter system are more accurate than those obtained using other dosimetry systems, especially when varied neutron spectra are encountered. This CR-39 dosimetry system does not have the severe energy dependence of albedo neutron dosimeters or the fading and reading problems encountered with NTA film. The system employs an electrochemical etch procedure that can be used to process large numbers of CR-39 dosimeters, suitable for operations where the number of personnel requires that many foils be processed. Experience shows that one full-time technician can etch and evaluate 2000 foils per month. The energy response to neutrons is fairly flat from about 80 keV to 3.5 MeV, but drops by about a factor of three in the 13 to 16 MeV range. The sensitivity of the dosimetry system is about 7 tracks/cm²/mrem, with a background equivalent to about 8 mrem for new CR-39 foils. The limit of sensitivity is approximately 10 mrem. The dosimeter has a significant variation in directional dependence, dropping to about 20% at 90°. This dosimeter has been used for personnel neutron dosimetry at the Lawrence Livermore National Laboratory for more than 18 months. 6 refs., 23 figs., 2 tabs.
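    With the calibration stated above (about 7 tracks/cm² per mrem, background equivalent to about 8 mrem for new foils), converting a counted track density to a dose estimate is a one-line calculation. A sketch, assuming the background is simply subtracted after conversion and negative results are clipped to zero:

```python
def neutron_dose_mrem(track_density_per_cm2, sensitivity=7.0, background_mrem=8.0):
    """Convert an etched-foil track density (tracks/cm^2) to a neutron dose
    estimate (mrem) using the reported calibration; values below the
    background level are clipped to zero."""
    dose = track_density_per_cm2 / sensitivity - background_mrem
    return max(dose, 0.0)
```

For example, a counted density of 350 tracks/cm² corresponds to 350/7 − 8 = 42 mrem, while anything near the quoted 10 mrem sensitivity limit is dominated by the background term.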

  2. Risk factors in attention deficit hyperactivity disorder: a family study

    PubMed Central

    Poissant, Hélène; Rapin, Lucile

    2012-01-01

    Summary Objective: Our study aimed to evaluate the risk factors associated with attention deficit hyperactivity disorder (ADHD), in terms of comorbidities and adversity factors, within families with ADHD. Methodology: 137 parents of 104 children with ADHD and 40 parents of 34 control children answered the items of a questionnaire. Chi-square and Student's t tests measured the association of each item with the groups and the differences between groups. Results: Children with ADHD had weaker school performance and a higher prevalence of learning, oppositional, conduct and anxiety disorders than control children. Learning difficulties were reported more often among fathers of children with ADHD. Social isolation and road accidents were more frequent among mothers of children with ADHD; these mothers suffered more from depression and anxiety disorders and took more medication than control mothers. Conclusion: The study of risk factors reveals a link between parents and children, specifically the presence of depression among mothers of children with ADHD and of learning difficulties among the fathers, suggesting a familial component in the disorder. The under-representation of ADHD among fathers of children with ADHD is discussed. PMID:23133459

  3. Flexible Inhibitor Fluid-Structure Interaction Simulation in RSRM.

    NASA Astrophysics Data System (ADS)

    Wasistho, Bono

    2005-11-01

    We employ our tightly coupled fluid/structure/combustion simulation code 'Rocstar-3' for solid propellant rocket motors to study 3D flows past rigid and flexible inhibitors in the Reusable Solid Rocket Motor (RSRM). We perform high-resolution simulations of a section of the rocket near the center joint slot at 100 seconds after ignition, using inflow conditions based on less detailed 3D simulations of the full RSRM. Our simulations include both inviscid and turbulent flows (using an LES dynamic subgrid-scale model), and explore the interaction between the inhibitor and the resulting fluid flow. The response of the solid components is computed by an implicit finite element solver. The internal mesh motion scheme in our block-structured fluid solver enables our code to handle significant changes in geometry. We compute turbulent statistics and determine the compound instabilities originating from the natural hydrodynamic instabilities and the inhibitor motion. The ultimate goal is to study the effect of inhibitor flexing on the turbulent field.

  4. Aeroacoustic Simulation of a Nose Landing Gear in an Open Jet Facility Using FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.; Carlson, Jan-Renee

    2012-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions compare favorably with the measured data. Unsteady flowfield data obtained from the FUN3D code are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the sound pressure levels at microphones placed in the farfield. Significant improvement in predicted noise levels is obtained when the flowfield data from the open-jet UFAFF simulations are used, as compared to the case using flowfield data from the closed-wall BART configuration.

  5. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    NASA Astrophysics Data System (ADS)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity about which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, a linear combination of vectors, and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
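    The three-kernel decomposition described above can be illustrated by expressing an explicit time step purely through the three operations the paper names: sparse matrix-vector product (SpMV), linear combination of vectors (axpy), and dot product. The sketch below uses a dense NumPy stand-in for the sparse operator; the operator `L`, the step size `dt`, and the update scheme are illustrative placeholders, not the paper's discretization.

```python
import numpy as np

def spmv(A, x):
    """Sparse matrix-vector product (dense stand-in for this sketch)."""
    return A @ x

def axpy(a, x, y):
    """Linear combination of vectors: a*x + y."""
    return a * x + y

def dot(x, y):
    """Dot product, e.g. for norms and convergence checks."""
    return float(np.dot(x, y))

def explicit_step(L, u, dt):
    """One explicit Euler step u_{n+1} = u_n + dt * L u_n, written only in
    terms of the three algebraic kernels above."""
    return axpy(dt, spmv(L, u), u)
```

Because every step is a composition of these three kernels, porting the solver to a new architecture reduces to providing efficient back-ends for just those kernels, which is the portability argument the abstract makes.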

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asadi, Somayeh; Masoudi, Seyed Farhad, E-mail: masoudi@kntu.ac.ir; Shahriari, Majid

    In ophthalmic brachytherapy dosimetry, it is common to represent the human eye anatomy as a water phantom. However, for better clinical analysis, the dose must be determined in the different parts of the eye. In this work, a full human eye is simulated with the MCNP-4C code, considering all parts of the eye: the lens, cornea, retina, choroid, sclera, anterior chamber, optic nerve, and the bulk of the eye comprising the vitreous body and tumor. The average dose in different parts of this full model of the human eye is determined and the results are compared with the dose calculated in a water phantom. The central-axis depth dose and the dose in the whole tumor for these two simulated eye models are calculated as well, and the results are compared.

  7. Release of RANKERN 16A

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Murphy, Christophe; Dobson, Geoff

    2017-09-01

    RANKERN 16 is the latest version of the point-kernel gamma radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS Software Service. RANKERN is well established in the UK shielding community for radiation shielding and dosimetry assessments. Many important developments have been made available to users in this latest release of RANKERN. The existing general 3D geometry capability has been extended to include import of CAD files in the IGES format providing efficient full CAD modelling capability without geometric approximation. Import of tetrahedral mesh and polygon surface formats has also been provided. An efficient voxel geometry type has been added suitable for representing CT data. There have been numerous input syntax enhancements and an extended actinide gamma source library. This paper describes some of the new features and compares the performance of the new geometry capabilities.

  8. RANS and potential flow coupling algorithms

    NASA Astrophysics Data System (ADS)

    Gallay, Sylvain

    In the aircraft development process, the chosen solution must satisfy numerous criteria in many domains, for example structures, aerodynamics, stability and control, performance and safety, while meeting strict schedules and minimizing costs. Candidate geometries are numerous in the early product-definition and preliminary-design stages, and multidisciplinary optimization environments are being developed by the various aeronautical industries. Different methods, involving different levels of modelling, are needed for the different phases of project development. During the definition and preliminary-design phases, fast methods are needed to study the candidates efficiently. Developing methods that improve the accuracy of existing ones while keeping computational cost low provides a higher level of fidelity in the early phases of a project and thus greatly reduces the associated risks. In aerodynamics, developments in viscous/inviscid coupling algorithms upgrade linear inviscid computational methods into nonlinear methods that account for viscous effects. These methods make it possible to characterize the viscous flow over configurations and to predict, among other things, stall mechanisms and the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential flow method and two-dimensional viscous sectional data. Existing methods are implemented and their limits identified. An original method is then developed and validated. Results on an elliptical wing demonstrate the algorithm's capability at high angles of attack and in the post-stall region. The coupling algorithm was compared with higher-fidelity data for configurations from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag and pitching-moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind-tunnel data and RANS models for transonic configurations. A high-lift configuration was used to study the potential flow method's modelling of high-lift surfaces, showing that camber can be accounted for solely in the viscous data.
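
A classical way to realize this kind of viscous/inviscid coupling is a nonlinear lifting-line style fixed-point iteration, where a 3D induced-angle correction is fed back into 2D viscous sectional data until the sectional lift converges. The sketch below is a minimal single-section illustration, not the thesis's algorithm; the elliptic-wing induced-angle relation, the aspect ratio, the relaxation factor and all names are assumptions.

```python
import math

def couple_lifting_line(alpha_geo, cl_2d, aspect_ratio=8.0,
                        n_iter=200, relax=0.1):
    """Fixed-point viscous/inviscid coupling for one wing section:
    estimate the induced angle from the current lift (elliptic-wing
    relation alpha_i = cl / (pi * AR)), evaluate the 2D viscous polar
    cl_2d at the effective angle, and under-relax toward convergence."""
    cl = 0.0
    for _ in range(n_iter):
        alpha_i = cl / (math.pi * aspect_ratio)
        cl_new = cl_2d(alpha_geo - alpha_i)  # 2D sectional (viscous) data
        cl += relax * (cl_new - cl)
    return cl
```

With a linear polar cl(a) = 2*pi*a the iteration reproduces the analytic finite-wing result cl = 2*pi*alpha / (1 + 2/AR); the value of the approach is that cl_2d can instead be a nonlinear viscous polar, carrying stall behaviour into the 3D solution.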

  9. Evaluation of antibiotic prescribing in the emergency department of the Hôpital Militaire d'Instruction Mohammed V (HMIMV)

    PubMed Central

    Elbouti, Anass; Rafai, Mostafa; Chouaib, Naoufal; Jidane, Said; Belkouch, Ahmed; Bakkali, Hicham; Belyamani, Lahcen

    2016-01-01

    This study aims to describe prescribing practices, assess their appropriateness and their compliance with usage rules, and examine the factors likely to influence them. It is a cross-sectional evaluation of antibiotic prescriptions covering 105 patients, carried out in the medical-surgical emergency department of the H.M.I.Med V in Rabat over a one-month period. Data were collected with a questionnaire recording demographic and anamnestic data, medical history, known allergies, specific clinical examination findings, paraclinical data, and the detailed antibiotic prescription. The collected data were then evaluated by a referring physician responsible for flagging any treatment errors. Among the infections prompting antibiotic prescription, respiratory and urinary tract conditions ranked first, and the antibiotic families most commonly used were the penicillins, quinolones and cephalosporins. Seventy-four prescriptions (70.5%) were both appropriate and compliant, versus 9 prescriptions (8.6%) that were justified but not appropriate, and 6 prescriptions (5.7%) judged unjustified by the referring physician owing to the absence of infection. Evaluations of medical practice are rarely conducted in healthcare institutions; it is in this context that we undertook this study, in order to improve the appropriateness of our antibiotic prescriptions and to optimize their compliance with current recommendations. PMID:28292124

  10. Preventing injuries from all-terrain vehicles

    PubMed Central

    Yanchar, Natalie L

    2012-01-01

    ABSTRACT All-terrain vehicles (ATVs) are widely used in Canada for recreation, transportation and work, such as farming. As motor vehicles, they can be particularly dangerous when operated by children and young adolescents, who lack the knowledge, physical size, strength, and cognitive and motor skills needed to drive them safely. The magnitude of the injury risk to young operators is stated explicitly in the warnings found in operator manuals and on the labels of recent models, and is also demonstrated by the large number of pediatric hospitalizations and deaths attributable to ATV-related trauma. Yet helmet use is far from universal among young operators, and unsafe riding behaviours, such as riding unsupervised or carrying passengers, remain common. Despite industry warnings and public education emphasizing the importance of safe behaviours and the risk of serious injury to children and adolescents, ATV-related injuries and deaths continue to be recorded. Until measures are taken to reduce these injuries substantially and effectively, restricting operation by youth, particularly those younger than 16 years of age, is essential to reduce the burden of ATV-related trauma among children and adolescents. This statement replaces the Canadian Paediatric Society position statement published in 2004.

  11. ESR/Alanine gamma-dosimetry in the 10-30 Gy range.

    PubMed

    Fainstein, C; Winkler, E; Saravi, M

    2000-05-01

    We report alanine dosimeter preparation, procedures for using the ESR dosimetry method, and the resulting calibration curve for gamma irradiation in the range 10 to 30 Gy. We used the calibration curve to measure the dose delivered in gamma irradiation of human blood, as required in blood transfusion therapy. The ESR/alanine results are compared against those obtained using the thermoluminescent dosimetry (TLD) method.
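
The calibration step described above can be sketched as an ordinary least-squares line through ESR amplitude versus dose points, inverted to read off an unknown dose from a measured amplitude. This is a generic illustration assuming a linear response over the range, not the authors' exact fitting procedure; function names are hypothetical.

```python
def fit_calibration(doses, amplitudes):
    """Ordinary least-squares fit of amplitude = a * dose + b."""
    n = len(doses)
    mx = sum(doses) / n
    my = sum(amplitudes) / n
    sxx = sum((x - mx) ** 2 for x in doses)
    sxy = sum((x - mx) * (y - my) for x, y in zip(doses, amplitudes))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def dose_from_amplitude(amplitude, a, b):
    """Invert the calibration line to estimate an unknown dose."""
    return (amplitude - b) / a
```

In practice each calibration point would be the mean of several irradiated dosimeters, and the fit residuals feed into the dose uncertainty budget.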

  12. USAF Summer Research Program - 1993 High School Apprenticeship Program Final Reports, Volume 12, Armstrong Laboratory

    DTIC Science & Technology

    1993-12-01

    on Panasonic TLD . Panasonic Industrial Company; Secaucus, New Jersey. 5. Thurlow, Ronald M. "Neutron Dosimetry Using a Panasonic Thermoluminescent...Radiation Dosimetry Branch Brooks Air Force Base San Antonio, Texas 78235 Final Report for: AFOSR Summer Research Program Armstrong Laboratory Sponsored...Associate Radiation Dosimetry Branch Armstrong Laboratory Abstract In an attempt to improve personnel monitoring for neutron emissions, Panasonic has

  13. A primary care approach to cannabis use problems

    PubMed Central

    Turner, Suzanne D.; Spithoff, Sheryl; Kahan, Meldon

    2014-01-01

    Abstract Objective To review the clinical features and complications of at-risk cannabis use and cannabis use disorder, and to present an office protocol for screening, identifying and managing these problems. Data sources PubMed was searched for controlled trials, observational studies and reviews on cannabis use among adolescents and young adults; cannabis-related psychiatric and medical harms; cannabis use disorder and its treatment; and guidelines on lower-risk cannabis use. Main message Physicians should ask all their patients about cannabis use. They should ask more often among adolescents and young adults, and among people at high risk of cannabis-related problems (those with a concurrent psychiatric or substance use disorder). Problems that can be caused by cannabis, such as mood disorders, psychosis and respiratory symptoms, should prompt questions about cannabis use. For patients who report using cannabis, physicians should ask about frequency and amount used, symptoms of tolerance or withdrawal, attempts to cut down, and cannabis-related problems. Lower-risk users smoke, inhale or ingest cannabis occasionally without any sign of school, work or social dysfunction; problematic users use daily or near-daily, have difficulty cutting down, and have impaired school, work or social functioning. Physicians should offer all patients with problematic use advice and brief counselling, emphasizing the health effects of cannabis, targeting abstinence (certain high-risk groups should abstain from cannabis entirely) or reduced use, and they must provide practical strategies for cutting down. Motivational interviewing techniques should be part of the counselling sessions. Physicians should refer patients who are unable to reduce their use, or who have problems related to their cannabis use, to specialized care, while ensuring that they remain in contact with their general practitioner. In addition, physicians should give all cannabis users information on lower-risk use. Conclusion Physicians should screen all patients in their practice for cannabis use at least once, particularly patients with problems that could be caused by cannabis. Screening should be more frequent among those at risk, at least annually. Lower-risk use must be distinguished from problematic use. Patients with problematic use should receive brief counselling sessions, and these patients should be referred to a specialist if they are unable to reduce or stop their use.

  14. Perceptions of their worst experience among young victims of sexual violence within their dating relationships

    PubMed Central

    Van Camp, Tinneke; Hébert, Martine; Fernet, Mylène; Blais, Martin; Lavoie, Francine

    2016-01-01

    This study explores the worst experiences lived in the dating relationships of young people who reported sexual violence in a recent couple relationship. Which situations do youth perceive as the most difficult, and are these limited to violent incidents? The questionnaire on youths' romantic relationships (Parcours amoureux des jeunes, PAJ) was completed by Quebec youth aged 14 to 18. In total, more than 600 participants reported at least one episode of sexual violence (often in combination with other forms of violence). We present the results of an inductive qualitative analysis based on an open-ended question about the worst experience lived. The observations suggest that, beyond experiences of violence, relationship difficulties, romantic breakups and unreciprocated romantic feelings are situations that youth describe as particularly difficult. These various issues experienced by youth should be taken into consideration in designing intervention services for them. PMID:28191266

  15. Groundwater flow and solute transport at the Mourquong saline-water disposal basin, Murray Basin, southeastern Australia

    NASA Astrophysics Data System (ADS)

    Simmons, Craig; Narayan, Kumar; Woods, Juliette; Herczeg, Andrew

    2002-03-01

    Saline groundwater and drainage effluent from irrigation are commonly stored in some 200 natural and artificial saline-water disposal basins throughout the Murray-Darling Basin of Australia. Their impact on underlying aquifers and the River Murray, one of Australia's major water supplies, is of serious concern. In one such scheme, saline groundwater is pumped into Lake Mourquong, a natural groundwater discharge complex. The disposal basin is hydrodynamically restricted by low-permeability lacustrine clays, but there are vulnerable areas in the southeast where the clay is apparently missing. The extent of vertical and lateral leakage of basin brines and the processes controlling their migration are examined using (1) analyses of chloride and stable isotopes of water (2H/1H and 18O/16O) to infer mixing between regional groundwater and lake water, and (2) the variable-density groundwater flow and solute-transport code SUTRA. Hydrochemical results indicate that evaporated disposal water has moved at least 100 m in an easterly direction and that there is negligible movement of brines in a southerly direction towards the River Murray. The model is used to consider various management scenarios. Salt-load movement to the River Murray was highest in a "worst-case" scenario with irrigation employed between the basin and the River Murray. Present-day operating conditions lead to little, if any, direct movement of brine from the basin into the river.
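
The chloride and stable-isotope analysis mentioned above rests on conservative two-end-member mixing: a sample's tracer concentration is a linear combination of the regional-groundwater and lake-water end-member concentrations. A minimal sketch (function name and the example concentrations are illustrative, not values from the study):

```python
def lake_water_fraction(c_sample, c_groundwater, c_lake):
    """Two-end-member mixing with a conservative tracer (e.g. chloride):
    c_sample = f * c_lake + (1 - f) * c_groundwater, solved for the
    lake-water fraction f."""
    return (c_sample - c_groundwater) / (c_lake - c_groundwater)
```

Applying the same relation independently to chloride and to the stable isotopes of water gives a consistency check: if the fractions disagree, processes other than simple mixing (e.g. evaporation) are modifying the tracers.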

  16. Validation of models of incident irradiance at the water surface in the Arctic

    NASA Astrophysics Data System (ADS)

    Julien, Laliberte

    In this thesis, two methods for estimating incident irradiance at the Arctic surface are evaluated. An in situ database was assembled from 16 oceanographic campaigns in the Arctic. For the dates on which irradiance was measured, estimates of daily incident surface irradiance were produced from ocean-colour satellites (Frouin et al. 2003) and from meteorological satellites (Belanger et al. 2013). A comparison exercise between the two satellite estimates was also carried out for the year 2004 over the entire Arctic. Comparison between the observations and the meteorological-satellite estimates gives a bias of 6% and a root mean square difference of 33%; comparison between the observations and the ocean-colour satellites gives a bias of 2% and a root mean square difference of 20%. Finally, the mean difference between the two satellite estimation methods over the whole Arctic for 2004 is 0.29 Einstein/m2/day, with a standard deviation of 6.78 Einstein/m2/day. Among other findings, the results show that the ocean-colour method is more accurate for estimating irradiance over a small area, because it better captures local variations in irradiance, whereas the meteorological-satellite method is more accurate over a large area, because the conditions under which it can provide an estimate are less restrictive. The meteorological-satellite method also shows that 38% of the annual Arctic irradiance is not accounted for by the ocean-colour satellites.
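
The bias and root-mean-square statistics quoted above can be computed as relative differences between satellite estimates and matched in situ observations. A minimal sketch assuming percent-of-observation definitions (the thesis may define the statistics slightly differently; names are illustrative):

```python
import math

def relative_bias_and_rmsd(estimates, observations):
    """Mean relative bias and relative root mean square difference (in %)
    of satellite estimates against matched in situ observations."""
    rel = [(e - o) / o for e, o in zip(estimates, observations)]
    bias = 100.0 * sum(rel) / len(rel)
    rmsd = 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))
    return bias, rmsd
```

A symmetric over- and under-estimate cancel in the bias but not in the RMSD, which is why both statistics are reported.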

  17. Deprescribing antihyperglycemic agents in older persons

    PubMed Central

    Farrell, Barbara; Black, Cody; Thompson, Wade; McCarthy, Lisa; Rojas-Fernandez, Carlos; Lochnan, Heather; Shamji, Salima; Upshur, Ross; Bouchard, Manon; Welch, Vivian

    2017-01-01

    Abstract Objective To develop evidence-based guidelines to help clinicians decide when and how to safely reduce the dose of antihyperglycemic agents, stop treatment, or switch to another agent in older persons. Methods We focused on the best available evidence and sought input from front-line practitioners throughout the drafting, review and adoption of the guidelines. The team comprised 7 health professionals (2 family physicians, 3 pharmacists, 1 nurse practitioner and 1 endocrinologist) and a methodology specialist; members disclosed all conflicts of interest. We used a rigorous process, including the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach, to develop the guidelines. We conducted a systematic review to assess the evidence on the benefits and harms of deprescribing antihyperglycemic agents. We reviewed reviews of the harms of continued antihyperglycemic treatment, and performed narrative syntheses of patient preferences and resource implications. These syntheses, together with GRADE assessments of the quality of the evidence, were used to generate the recommendations. The team refined the guideline content and recommendations by consensus and synthesized clinical considerations to address common questions from front-line clinicians. A draft of the guidelines was distributed to clinicians and stakeholders for review, and revisions were made to the text at each stage. A decision-support algorithm was developed to accompany the guidelines. Recommendations We recommend deprescribing antihyperglycemic agents known to contribute to hypoglycemia in older persons at risk, or in situations where antihyperglycemic agents might cause other adverse effects, and individualizing targets and deprescribing accordingly in persons who are frail, have dementia, or have limited life expectancy. Conclusion These guidelines make practical recommendations for deciding when and how to deprescribe antihyperglycemic agents. They are intended to support, not dictate, decision making together with the patient. PMID:29138168

  18. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina R.; Georgiadis, Nicholas J.; DeBonis, James R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing Large-Eddy Simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.
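
The convective Mach number used above to characterize the mixing layer is conventionally defined, for streams of similar specific-heat ratio, as Mc = (U1 - U2) / (a1 + a2), with U the stream velocities and a the sound speeds. A one-line sketch (the example velocities and sound speeds are illustrative, not the experiment's values):

```python
def convective_mach(u1, u2, a1, a2):
    """Convective Mach number of a mixing layer between two streams with
    velocities u1 > u2 and sound speeds a1, a2: Mc = (u1 - u2) / (a1 + a2)."""
    return (u1 - u2) / (a1 + a2)
```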

  20. Systematic review of the effects of family meal frequency on psychosocial outcomes in youth

    PubMed Central

    Harrison, Megan E.; Norris, Mark L.; Obeid, Nicole; Fu, Maeghan; Weinstangel, Hannah; Sampson, Margaret

    2015-01-01

    Abstract Objective To conduct a systematic review of the effects of frequent family meals on psychosocial outcomes in children and adolescents, and to examine whether outcomes differ by sex. Data sources Studies were identified by searching MEDLINE (1948 to the last week of June 2011) and PsycINFO (1806 to the first week of July 2011) through the Ovid interface. MeSH headings and key words, used alone or in combination, were the following: family, meal, food intake, nutrition, diets, body weight, adolescent attitudes, eating behaviour, feeding behaviour and eating disorders. The bibliographies of relevant articles were also reviewed. Study selection The initial search yielded 1783 articles. To be included in the analysis, studies had to meet the following criteria: published in English in a peer-reviewed journal; involving children or adolescents; addressing the influence of family meals on psychosocial outcomes (e.g., substance use, eating disorders, depression) in children or adolescents; and having an appropriate study design, including acceptable statistical methods for the analysis of the outcomes. Fourteen articles met the inclusion criteria. Two independent reviewers examined and analyzed the articles. Synthesis Overall, the findings indicate that family meal frequency is inversely associated with eating disorders, alcohol and substance use, violent behaviour, and feelings of depression or thoughts of suicide in adolescents. There is a positive relationship between frequent family meals, good self-esteem and academic success. The studies show considerable differences in outcomes between male and female children and adolescents, with female subjects showing more positive outcomes. Conclusion This systematic review provides further support for advocating frequent family meals. All health professionals should educate families about the benefits of regularly eating meals together.

  1. Bacterial infection in the burn patient

    PubMed Central

    Le Floch, R.; Naux, E.; Arnould, J.F.

    2015-01-01

    Summary Death in a burn patient is most often caused by infection, bacterial in the great majority of cases. Loss of the skin barrier, invasive devices and burn-related immunosuppression are three mechanisms that combine to produce these infections. In an inflammatory patient, the general clinical signs of infection are poorly discriminating. Given the severity of infections in these patients, prevention is an essential component of care. Because of the particular pharmacokinetics of burn patients, antibiotic dosing must be adapted and blood level monitoring must be systematic. At a time when resistance is becoming worrisome, research into therapeutic alternatives (including virulence-factor inhibitors, antimicrobial peptides, polyphenols and immunotherapy) is becoming crucial. One of the most promising therapeutic possibilities appears to be phage therapy. PMID:27252607

  2. Dosimetry procedures for an industrial irradiation plant

    NASA Astrophysics Data System (ADS)

    Grahn, Ch.

    Accurate and reliable dosimetry procedures constitute a very important part of process control and quality assurance at a radiation processing plant. γ-Dose measurements were made on the GBS 84 irradiator for food and other products on pallets or in containers. Chemical dosimeters were exposed in the facility under conditions of typical plant operation. The choice of the dosimeter systems employed was based on experience in chemical dosimetry gained over several years. Dose uniformity information was obtained in air, spices, bulbs, feeds, cosmetics, plastics and surgical goods. Most products currently irradiated require a dose uniformity that can be efficiently provided by pallet or box irradiators like the GBS 84. The radiation performance characteristics and some dosimetry procedures are discussed.

  3. Neutron Exposure Parameters for the Dosimetry Capsule in the Heavy-Section Steel Irradiation Program Tenth Irradiation Series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.A. Baldwin; F.B.K. Kam; I. Remec

    1998-10-01

    This report describes the computational methodology for the least-squares adjustment of the dosimetry data from the HSSI 10.0D dosimetry capsule with neutronics calculations. It presents, at each dosimetry location, exposure rates for neutron fluence greater than 1.0 MeV, fluence greater than 0.1 MeV, and displacements per atom. Exposure parameter distributions are also described in terms of three-dimensional fitting functions. When fitting functions are used, it is suggested that an uncertainty of 6% (1σ) be associated with the exposure-rate values. The specific activity of each dosimeter at the end of irradiation is listed in the appendix.
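
Applying a fitted exposure-rate value with its stated 6% (1σ) relative uncertainty amounts to simple linear propagation when accumulating exposure over an irradiation time at constant rate. A hedged sketch (names are illustrative, and this ignores any time dependence of the rate, which the report's fitting functions would capture):

```python
def accumulated_exposure(rate, rate_rel_unc, duration):
    """Exposure accumulated at a constant rate over a given duration,
    with the 1-sigma relative uncertainty of the fitted rate propagated
    linearly onto the accumulated value."""
    exposure = rate * duration
    return exposure, exposure * rate_rel_unc
```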

  4. Dosimetric Considerations in Radioimmunotherapy and Systemic Radionuclide Therapies: A Review

    PubMed Central

    Loke, Kelvin S. H.; Padhy, Ajit K.; Ng, David C. E.; Goh, Anthony S.W.; Divgi, Chaitanya

    2011-01-01

    Radiopharmaceutical therapy, once touted as the “magic bullet” in radiation oncology, is increasingly being used in the treatment of a variety of malignancies, albeit in later disease stages. With ever-increasing public and medical awareness of radiation effects, radiation dosimetry is becoming more important. Dosimetry allows administration of the maximum tolerated radiation dose to the tumor or organ to be treated while limiting radiation to critical organs. Traditional tumor dosimetry involved acquiring pretherapy planar scans and plasma estimates with a diagnostic dose of the intended radiopharmaceutical. New advances in single photon emission computed tomography and positron emission tomography systems allow semi-quantitative dosimetry measurements, so that treatment can be tailored to each individual patient. PMID:22144871

  5. A new approach to the management of genital warts

    PubMed Central

    Lopaschuk, Catharine C.

    2013-01-01

    Abstract Objective To summarize old and new treatments for genital warts (condylomata) and to determine how to use them appropriately. Data sources A literature search was performed in the following databases: MEDLINE, PubMed, EMBASE, the Cochrane Database of Systematic Reviews and the Cochrane Central Register of Controlled Trials, ACP Journal Club, and Trip. The bibliographies of retrieved articles were also reviewed. Clinical studies, qualitative review articles, consensus reports and clinical practice guidelines were selected. Main message Symptomatic warts are present in at least 1% of people aged 15 to 49, and it is estimated that up to 50% of people are infected with human papillomavirus at some point in their lives. Imiquimod and podophyllotoxin are 2 newer treatments for external genital warts that are less painful and can be applied by patients at home. In addition, the quadrivalent human papillomavirus vaccine has been shown to be effective in preventing genital warts and cervical cancer. The older treatment methods still have their place in certain situations, such as intravaginal, urethral, anal or recalcitrant warts, or for pregnant patients. Conclusion The newer treatments for external genital warts can reduce the pain caused by therapy and the number of office visits. The other treatment methods remain useful in certain situations.

  6. Radiation risk assessment in neonatal radiographic examinations of the chest and abdomen: a clinical and Monte Carlo dosimetry study

    NASA Astrophysics Data System (ADS)

    Makri, T.; Yakoumakis, E.; Papadopoulou, D.; Gialousis, G.; Theodoropoulos, V.; Sandilos, P.; Georgiou, E.

    2006-10-01

    To assess the radiation risk associated with radiological examinations in neonatal intensive care units, thermoluminescence dosimetry was used to measure the entrance surface dose (ESD) in 44 AP chest and 28 AP combined chest-abdominal exposures in a sample of 60 neonates. The mean values of ESD were found to be 44 ± 16 µGy and 43 ± 19 µGy, respectively. The MCNP-4C2 code, with a mathematical phantom simulating a neonate and appropriate x-ray energy spectra, was employed for the simulation of the AP chest and AP combined chest-abdominal exposures. Equivalent organ dose per unit ESD and energy imparted per unit ESD calculations are presented in tabular form. Combined with the ESD measurements, these calculations yield an effective dose of 10.2 ± 3.7 µSv, regardless of sex, and an imparted energy of 18.5 ± 6.7 µJ for the chest radiograph. The corresponding results for the combined chest-abdominal examination are 14.7 ± 7.6 µSv (males)/17.2 ± 7.6 µSv (females) and 29.7 ± 13.2 µJ. The calculated total risk per radiograph was low, ranging between 1.7 and 2.9 per million neonates per film, and slightly higher for females. The results of this study are in good agreement with previous studies, especially in view of the diversity of the calculation methods used.
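The effective dose quoted in this record is, by definition, the tissue-weighted sum of equivalent organ doses, E = Σ w_T · H_T. A minimal sketch with a small subset of ICRP 103 tissue weighting factors and illustrative organ doses (not the paper's tabulated values):

```python
# Effective dose as the tissue-weighted sum of equivalent organ doses:
#   E = sum_T w_T * H_T
# Weights are a subset of ICRP 103 values; the organ doses are
# illustrative, not the neonatal data from this study.

w_T = {"lung": 0.12, "breast": 0.12, "thyroid": 0.04}      # tissue weighting factors
H_T_uSv = {"lung": 40.0, "breast": 35.0, "thyroid": 10.0}  # equivalent doses, µSv

E = sum(w_T[t] * H_T_uSv[t] for t in w_T)
print(round(E, 2))  # 0.12*40 + 0.12*35 + 0.04*10 = 9.4
```

A full calculation would sum over all ICRP-listed tissues, with organ doses obtained (as here) from Monte Carlo conversion coefficients multiplied by the measured ESD.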

  7. JADA: a graphical user interface for comprehensive internal dose assessment in nuclear medicine.

    PubMed

    Grimes, Joshua; Uribe, Carlos; Celler, Anna

    2013-07-01

    The main objective of this work was to design a comprehensive dosimetry package that would keep all aspects of internal dose calculation within the framework of a single software environment and that would be applicable to a variety of dose calculation approaches. Our MATLAB-based graphical user interface (GUI) can be used for processing data obtained using pure planar, pure SPECT, or hybrid planar/SPECT imaging. Time-activity data for source regions are obtained using a set of tools that allow the user to reconstruct SPECT images, load images, coregister a series of planar images, and perform two-dimensional and three-dimensional image segmentation. Curve fits are applied to the acquired time-activity data to construct time-activity curves, which are then integrated to obtain time-integrated activity coefficients. Subsequently, dose estimates are made using one of three methods. The organ-level dose calculation subGUI calculates mean organ doses equivalent to the dose assessment performed by OLINDA/EXM. Voxelized dose calculation options, which include the voxel S value approach and Monte Carlo simulation using the EGSnrc user code DOSXYZnrc, are available within the process 3D image data subGUI. The developed internal dosimetry software package provides an assortment of tools for every step in the dose calculation process, eliminating the need for manual data transfer between programs. This saves time and minimizes user errors, while offering a versatility that can be used to efficiently perform patient-specific internal dose calculations in a variety of clinical situations.
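The curve-fitting and integration step described above can be sketched as follows: fit a mono-exponential to sampled organ activity, then integrate the fitted curve analytically to obtain the time-integrated activity. The data below are synthetic and noiseless (not JADA output), so the fit recovers the generating parameters exactly:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit A(t) = A0*exp(-lam*t) to time-activity samples, then integrate
# analytically: integral from 0 to infinity is A0/lam.
# Synthetic demo data, not real patient measurements.

def mono_exp(t, a0, lam):
    return a0 * np.exp(-lam * t)

t_h = np.array([1.0, 4.0, 24.0, 48.0, 72.0])  # sample times, h
true_a0, true_lam = 100.0, 0.05                # MBq and 1/h (illustrative)
activity = mono_exp(t_h, true_a0, true_lam)    # noiseless "measurements"

(a0_fit, lam_fit), _ = curve_fit(mono_exp, t_h, activity, p0=(50.0, 0.1))

a_tilde = a0_fit / lam_fit   # time-integrated activity, MBq*h
print(round(a_tilde))        # 100/0.05 = 2000
```

Real time-activity curves often need multi-exponential fits and a trapezoidal segment over the sampled interval; the analytic tail integral is the same idea applied to the terminal component.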

  8. A TLD-based ten channel system for the spectrometry of bremsstrahlung generated by laser-matter interaction

    NASA Astrophysics Data System (ADS)

    Horst, Felix; Fehrenbacher, Georg; Radon, Torsten; Kozlova, Ekaterina; Rosmej, Olga; Czarnecki, Damian; Schrenk, Oliver; Breckow, Joachim; Zink, Klemens

    2015-05-01

    This work presents a thermoluminescence dosimetry based method for the measurement of bremsstrahlung spectra in the energy range from 30 keV to 100 MeV, resolved into ten energy intervals, and for photon ambient dosimetry in ultrashort pulsed radiation fields such as those generated during operation of the PHELIX laser at the GSI Helmholtzzentrum für Schwerionenforschung. The method is a routine-oriented application of a multi-filter technique; the data analysis takes around 1 h. The spectral information is obtained by unfolding the response of ten thermoluminescence dosimeters arranged as a stack with absorbers of different materials and thicknesses, so that each has a different response function to photon radiation. These response functions were simulated using the Monte Carlo code FLUKA. An algorithm was developed to unfold bremsstrahlung spectra from the readings of the ten dosimeters. The method has been validated by measurements at a clinical electron linear accelerator (6 MV and 18 MV bremsstrahlung). First measurements at the PHELIX laser system were carried out in December 2013 and January 2014. Spectra with photon energies up to 10 MeV and mean energies up to 420 keV were observed at laser intensities around 10¹⁹ W/cm² on a titanium foil target. The measurement results imply that the steel walls of the target chamber might be an additional bright x-ray source.
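The unfolding step described above solves a linear system: the dosimeter readings d relate to the fluence φ in each energy bin via the simulated response matrix R, d = Rφ. A minimal sketch using a non-negative least-squares solve, with a small made-up response matrix rather than FLUKA-simulated responses:

```python
import numpy as np
from scipy.optimize import nnls

# Unfolding sketch: readings d = R @ phi, with R the (simulated)
# per-channel response to each energy bin.  Solving with non-negative
# least squares enforces a physical (phi >= 0) spectrum.
# R and phi_true are illustrative, not FLUKA response functions.

R = np.array([[1.00, 0.50, 0.10],
              [0.20, 1.00, 0.40],
              [0.05, 0.30, 1.00]])   # 3 TLD channels x 3 energy bins

phi_true = np.array([2.0, 1.0, 0.5])
d = R @ phi_true                     # synthetic noiseless "readings"

phi_unfolded, residual = nnls(R, d)
print(np.round(phi_unfolded, 3))     # recovers phi_true
```

With noisy readings the problem is ill-conditioned, which is why practical unfolding algorithms add regularization or iterative constraints on top of this basic least-squares core.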

  9. Direct megavoltage photon calibration service in Australia

    PubMed Central

    Ramanathan, G.; Oliver, C.; Cole, A.; Lye, J.; Harty, P. D.; Wright, T.; Webb, D. V.; Followill, D. S.

    2014-01-01

    The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) maintains the Australian primary standard of absorbed dose. Until recently, the standard was used to calibrate ionisation chambers only in 60Co gamma rays. These chambers are then used by radiotherapy clinics to determine linac output, using a correction factor (kQ) to take into account the different spectra of 60Co and the linac. Over the period 2010–2013, ARPANSA adapted the primary standard to work in megavoltage linac beams, and has developed a calibration service at three photon beams (6, 10 and 18 MV) from an Elekta Synergy linac. We describe the details of the new calibration service, the method validation and the use of the new calibration factors with the International Atomic Energy Agency’s TRS-398 dosimetry Code of Practice. The expected changes in absorbed dose measurements in the clinic when shifting from 60Co to the direct calibration are determined. For a Farmer chamber (model 2571), the measured chamber calibration coefficient is expected to be reduced by 0.4, 1.0 and 1.1 % respectively for these three beams when compared to the factor derived from 60Co. These results are in overall agreement with international absorbed dose standards and calculations by Muir and Rogers in 2010 of kQ factors using Monte Carlo techniques. The reasons for and against moving to the new service are discussed in the light of the requirements of clinical dosimetry. PMID:25146559
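The clinical dose determination this record discusses follows the TRS-398 formalism: D_w,Q = M_Q · N_D,w · k_Q, where M_Q is the corrected chamber reading. With a direct megavoltage calibration the coefficient is measured in the linac beam itself, removing the k_Q step. A sketch with illustrative numbers (not ARPANSA calibration data):

```python
# TRS-398 reference dosimetry in a beam of quality Q:
#   D_w,Q = M_Q * N_D,w * k_Q
# With a 60Co-based calibration, k_Q corrects for the beam quality;
# with a direct MV calibration, N_D,w applies to the beam and k_Q = 1.
# All values below are illustrative placeholders.

def dose_to_water(m_q_nC, n_dw_gy_per_nC, k_q=1.0):
    """Absorbed dose to water (Gy) from a corrected reading (nC)."""
    return m_q_nC * n_dw_gy_per_nC * k_q

m_q = 20.0        # corrected chamber reading, nC (illustrative)
n_dw = 0.045      # calibration coefficient, Gy/nC (illustrative)
k_q_6mv = 0.992   # beam quality correction for a 6 MV beam (illustrative)

d_w = dose_to_water(m_q, n_dw, k_q_6mv)
print(round(d_w, 4))  # 20*0.045*0.992 = 0.8928
```

The percent-level shifts quoted in the abstract correspond to the difference between this tabulated k_Q route and a directly measured MV calibration coefficient.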

  10. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters

    NASA Astrophysics Data System (ADS)

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.

    2015-12-01

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. From this, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLDs). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 newton compression force were determined, with 43% contraction in the loading direction and 37% expansion in the orthogonal direction. Finally, a comparison of the experimental and simulated data showed agreement within 11.5% ± 5.9%.
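The mapping step described above can be sketched simply: each node of the compressed geometry keeps the dose computed there while being moved back to its uncompressed position by the FEM displacement field, and the dose is then resampled in the uncompressed geometry. The 2D points, displacements and doses below are made up for illustration, not the paper's ABAQUS/MCNP5 results:

```python
import numpy as np
from scipy.interpolate import griddata

# Dose mapping sketch: dose is computed (or measured) at nodes of the
# compressed geometry; applying the FEM displacement field moves each
# node, and its dose, to the uncompressed configuration, where the
# dose field is resampled.  All values are illustrative 2D placeholders.

pts_compressed = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
displacement = np.array([[0.0, 0.2], [0.1, 0.2], [0.0, -0.1], [0.1, -0.1]])
dose_gy = np.array([2.0, 1.5, 1.8, 1.2])   # dose at the compressed nodes

# each node carries its dose to the uncompressed position
pts_uncompressed = pts_compressed + displacement

# resample the dose at a query point in the uncompressed geometry
xi = np.array([[0.5, 0.5]])
dose_at_center = griddata(pts_uncompressed, dose_gy, xi, method="linear")
print(float(dose_at_center[0]))
```

A real 3D implementation interpolates over the full finite element mesh, but the principle, advect the dose with the displacement field and reinterpolate, is the same.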

  11. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters.

    PubMed

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S

    2015-12-07

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. From this, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLDs). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 newton compression force were determined, with 43% contraction in the loading direction and 37% expansion in the orthogonal direction. Finally, a comparison of the experimental and simulated data showed agreement within 11.5% ± 5.9%.

  12. Initial Nuclear Radiation Hardness Validation Test

    DTIC Science & Technology

    2008-11-03

    d. Dosimetry from the GDR environment. The TLDs should be placed as indicated in the section above and their location used to determine the... electronics to levels which will account for: all error terms in dosimetry and data recording, response differences in microcircuits due to different...the internal gamma dose environment of an LRU. d. Dosimetry from the gamma dose environment. The TLDs should be placed as indicated in the

  13. Effects of High Energy Electron Irradiation on a Yttrium Barium(2) Copper(3) Oxygen(7-delta) High Temperature Superconductor

    DTIC Science & Technology

    1991-09-01

    From these measurements, the dose was calculated and then compared to a measured dose obtained from TLD dosimetry. Technical problems with the LINAC precluded TLD dosimetry from being accomplished during the first run and, therefore, it was performed on the second run only. After irradiation, a NaI

  14. Improvement and Analysis of the Radiation Response of RADFET Dosimeters

    DTIC Science & Technology

    1992-06-15

    TLD), silicon p-i-n diode responses and silicon calorimetry (AWE Dosimetry Service). Intensive preparations were made by REM and the experiments were...SUB-GROUP dose: RADFET: tactical dosimetry: silicon: metal-oxide-semiconductor (MOS) field effect transistor (FET): silicon dioxide: space...1.1 Principle of a dosimetry system based on the RADFET (radiation-sensitive field-effect transistor): (a) microscopic cross-section of chip, (b) chip

  15. Characterization of the Radiological Environment at J-Village during Operation Tomodachi

    DTIC Science & Technology

    2013-02-01

    individual as compared to those for the helicopter crew members (Appendix A). 3.2.2. Other Relevant Dosimetry Results Thermoluminescent dosimeter (TLD)...internal monitoring results are available for 14 of these individuals. External dosimetry data (EPD and TLD) showed that the maximum recorded dose for an...Washington, DC. http://www.NNSAResponseData.net. Accessed December 7. USAFCRD (U. S. Air Force Center for Radiation Dosimetry), 2011. Electronic Pocket

  16. AFRRI Reports, April-June 1990

    DTIC Science & Technology

    1990-07-01

    described in detail in the companion paper (4). In vivo dosimetry was done using Harshaw (Solon, Ohio) TLD-100 lithium fluoride thermoluminescent...provide replicate measurements. Two separate dosimetry tubes were developed (Fig. 1). The first contained 30 TLD capsules loaded in a 90-cm length...(Fig. 1: in situ dosimetry tubes containing LiF TLDs in gelatin capsules, with nylon balls, a steel ball and an epoxy plug; scale 3 cm)

  17. The Effect of Irradiation on Bone Remodelling and the Structural Integrity of the Vertebral Column

    DTIC Science & Technology

    1990-01-01

    thermoluminescent dosimetry calculations were also used. Seventy-four lithium fluoride thermoluminescent dosimeters (TLDs) were selected from 120...and thermoluminescent dosimetry (TLD) were used to evaluate the actual doses administered. The TLD analysis was completed with five strips of five...professional help with the dose administration and the dosimetry. And especially to my husband, Kevin, without whose help and encouragement I could not have

  18. [Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].

    PubMed

    Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai

    To improve the efficiency and accuracy of extraction and analysis of dosimetry data in radiotherapy plans for a batch of patients. Using the interface functions provided by the Matlab platform, a program was written to extract the dosimetry data exported from the treatment planning system in DICOM RT format and to export the dose-volume data to an Excel file in an SPSS-compatible format. This method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were compatible with SPSS in format; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106% and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. Automatic extraction of dosimetry data in DICOM RT format for batches of patients, SPSS-compatible data export and quick analysis were achieved in this work. The efficiency of clinical research based on dosimetry data analysis of large numbers of patients will be improved with this method.
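The dose-volume statistics extracted by such a pipeline reduce, per structure, to simple operations on the per-voxel dose array. A minimal sketch computing a few common metrics from synthetic data (a real pipeline would first read the dose grid and structure masks with a DICOM library; the PTV values below are made up):

```python
import numpy as np

# Dose-volume metrics from a structure's per-voxel dose values:
# mean/max dose, and Vxx (fraction of the structure receiving at
# least xx% of the prescription).  Synthetic data, not DICOM RT.

def dvh_metrics(dose_gy, prescription_gy):
    dose = np.asarray(dose_gy, dtype=float)
    return {
        "Dmean": dose.mean(),
        "Dmax": dose.max(),
        # V95: fraction of voxels receiving >= 95% of prescription
        "V95": (dose >= 0.95 * prescription_gy).mean(),
    }

ptv_dose = np.array([48.0, 50.0, 51.0, 49.5, 47.0])  # Gy (illustrative)
m = dvh_metrics(ptv_dose, prescription_gy=50.0)
print(round(m["Dmean"], 1), round(m["V95"], 1))  # 49.1 0.8
```

Batch analysis then amounts to running this over every structure of every patient and writing the resulting rows to a statistics-package-friendly table, which is what the Excel/SPSS export in the paper automates.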

  19. Time resolved dosimetry of human brain exposed to low frequency pulsed magnetic fields.

    PubMed

    Paffi, Alessandra; Camera, Francesca; Lucano, Elena; Apollonio, Francesca; Liberti, Micaela

    2016-06-21

    Accurate dosimetry is key to understanding brain stimulation and the related interaction mechanisms with neuronal tissues underlying the growing body of literature reporting effects on the human brain induced by low-level, low frequency pulsed magnetic fields (PMFs). Most literature on brain dosimetry estimates the maximum E field value reached inside the tissue without considering its time pattern or tissue dispersivity. Nevertheless, time-resolved dosimetry accounting for the dispersive behavior of tissues becomes necessary, considering that the threshold for the onset of an effect may vary depending on the pulse waveform, and that tissues may filter the applied stimulatory fields, altering the size and shape of the predicted stimulatory waveform. In this paper, time-resolved dosimetry has been applied to a realistic brain model exposed to the signal presented in Capone et al (2009 J. Neural Transm. 116 257-65), accounting for the broadband dispersivity of brain tissues up to several kHz, to accurately reconstruct electric field and current density waveforms inside different brain tissues. The results obtained by exposing the Duke brain model to this PMF signal show that the E peak in the brain is considerably underestimated if a simple monochromatic dosimetry is carried out at the pulse repetition frequency of 75 Hz.
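The underestimation reported above can be illustrated numerically: the induced E field scales with dB/dt, so a pulsed B(t) must be decomposed into its harmonics and each scaled by its own frequency, rather than scaling the whole waveform by the 75 Hz repetition frequency. The sketch below uses an illustrative rectangular pulse and a uniform-loop scaling E_k ∝ 2πf_k·B_k per harmonic (no tissue dispersion model, which the paper additionally includes):

```python
import numpy as np

# Why monochromatic dosimetry underestimates the peak: E ~ dB/dt, so
# short pulse edges contribute high-frequency harmonics with large
# 2*pi*f weights.  Waveform and geometry factor are illustrative.

f_rep = 75.0               # pulse repetition frequency, Hz
fs = 100e3                 # sampling rate, Hz
n = int(fs / f_rep)        # samples in one period
t = np.arange(n) / fs

# one short rectangular pulse per period (illustrative B(t), a.u.)
b = np.where(t < 0.5e-3, 1.0, 0.0)

k_geom = 1.0               # geometry factor linking dB/dt to E (illustrative)
B_k = np.fft.rfft(b) / n
f_k = np.fft.rfftfreq(n, 1 / fs)
E_k = 1j * k_geom * 2 * np.pi * f_k * B_k   # per-harmonic E ~ dB/dt
e_t = np.fft.irfft(E_k * n, n)              # broadband (time-resolved) E

# naive estimate: scale the whole waveform at the repetition frequency
e_mono = k_geom * 2 * np.pi * f_rep * b

print(e_t.max() > e_mono.max())  # True: the broadband peak is far larger
```

Adding frequency-dependent tissue conductivity, as the paper does, simply means multiplying each harmonic by a complex, frequency-dependent factor before the inverse transform.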

  20. A multicentre 'end to end' dosimetry audit for cervix HDR brachytherapy treatment.

    PubMed

    Palmer, Antony L; Diez, Patricia; Gandon, Laura; Wynn-Jones, Andrea; Bownes, Peter; Lee, Chris; Aird, Edwin; Bidmead, Margaret; Lowe, Gerry; Bradley, David; Nisbet, Andrew

    2015-02-01

    To undertake the first multicentre fully 'end to end' dosimetry audit for HDR cervix brachytherapy, comparing planned and delivered dose distributions around clinical treatment applicators, with review of local procedures. A film-dosimetry audit was performed at 46 centres, including imaging, applicator reconstruction, treatment planning and delivery. Film dose maps were calculated using triple-channel dosimetry and compared to RTDose data from treatment planning systems. Deviations between plan and measurement were quantified at prescription Point A and using gamma analysis. Local procedures were also discussed. The mean difference between planned and measured dose at Point A was -0.6% for plastic applicators and -3.0% for metal applicators, at standard uncertainty 3.0% (k = 1). Isodose distributions agreed within 1 mm over a dose range of 2-16 Gy. Mean gamma passing rates exceeded 97% for plastic and metal applicators at 3% (local)/2 mm criteria. Two errors were found: one dose normalisation error and one applicator library misaligned with the imaged applicator. Suggestions for quality improvement were also made. The concept of 'end to end' dosimetry audit for HDR brachytherapy has been successfully implemented in a multicentre environment, providing evidence that a high level of accuracy in brachytherapy dosimetry can be achieved. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
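The gamma analysis used in such audits combines a dose-difference and a distance-to-agreement criterion: for each measured point, gamma is the minimum over reference points of sqrt((Δdose/DD)² + (Δdistance/DTA)²), and the point passes if gamma ≤ 1. A minimal 1D sketch on synthetic profiles (note it normalizes the dose difference globally for simplicity, whereas the audit used a local criterion):

```python
import numpy as np

# Minimal 1D gamma-index sketch.  For each measured point, gamma is the
# minimum over all reference points of the combined dose/distance
# metric; passing means gamma <= 1.  Profiles are synthetic, and the
# dose-difference term uses a *global* normalization for simplicity.

def gamma_1d(ref_dose, meas_dose, x_mm, dd_frac=0.03, dta_mm=2.0):
    gam = np.empty(len(meas_dose))
    for i, (xm, dm) in enumerate(zip(x_mm, meas_dose)):
        dose_diff = (dm - ref_dose) / (dd_frac * ref_dose.max())
        dist = (xm - x_mm) / dta_mm
        gam[i] = np.sqrt(dose_diff ** 2 + dist ** 2).min()
    return gam

x = np.arange(0.0, 10.0, 1.0)                 # positions, mm
ref = 10.0 * np.exp(-((x - 5.0) / 3.0) ** 2)  # reference profile, Gy
meas = ref * 1.01                             # measurement with a 1% offset

gamma = gamma_1d(ref, meas, x)
passing = (gamma <= 1.0).mean()
print(passing)  # 1.0 -- a 1% dose offset passes a 3%/2 mm criterion
```

Production gamma tools work on 2D/3D dose grids with interpolation between points, but the pass/fail logic is exactly this per-point minimization.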
